feature bit.
Signed-off-by: Michael S. Tsirkin m...@redhat.com
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/virtio/virtio_ring.c | 75 +---
include/linux/virtio.h | 14
include/uapi/linux/virtio_ring.h | 5 ++-
3 files changed
On 09/22/2014 02:55 PM, Michael S. Tsirkin wrote:
On Mon, Sep 22, 2014 at 11:30:23AM +0800, Jason Wang wrote:
On 09/20/2014 06:00 PM, Paolo Bonzini wrote:
On 19/09/2014 09:10, Jason Wang wrote:
-if (!vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX)) {
+if (vq->urgent
On 09/20/2014 06:00 PM, Paolo Bonzini wrote:
On 19/09/2014 09:10, Jason Wang wrote:
- if (!vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX)) {
+ if (vq->urgent || !vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX)) {
So the urgent descriptor only works when the event index was not enabled
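For reference, a minimal sketch of the notification decision under discussion, assuming the vq->urgent flag added by the quoted diff; the helper vhost_needs_notify() and its flattened parameter list are illustrative, not the actual vhost_notify() body:

/* Sketch only: vq->urgent comes from the quoted diff; this helper
 * and its parameters are hypothetical, not the real vhost code. */
static bool vhost_needs_notify(struct vhost_virtqueue *vq,
			       u16 avail_flags, u16 used_event,
			       u16 new_idx, u16 old_idx)
{
	/* Urgent descriptors, and guests without EVENT_IDX, fall back
	 * to the classic interrupt-suppression flag. */
	if (vq->urgent || !vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX))
		return !(avail_flags & VRING_AVAIL_F_NO_INTERRUPT);

	/* With EVENT_IDX, notify only when the used index crosses the
	 * guest's used_event (vring_need_event() in virtio_ring.h). */
	return vring_need_event(used_event, new_idx, old_idx);
}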
On 07/01/2014 06:49 PM, Michael S. Tsirkin wrote:
Signed-off-by: Michael S. Tsirkin m...@redhat.com
---
drivers/vhost/vhost.h | 19 +--
drivers/vhost/net.c | 30 +-
drivers/vhost/scsi.c | 23 +++
drivers/vhost/test.c | 5
and for a guest that has a bad irq detection routine (such as note_interrupt() in Linux), this bad irq would be recognized as soon as in the past.
Signed-off-by: Jason Wang jasowang at redhat.com
---
virt/kvm/ioapic.c | 47 +--
virt
On 08/27/2014 05:31 PM, Zhang Haoyu wrote:
Hi, all
I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter an e1000 NIC interrupt storm, because if (!ent->fields.mask && (ioapic->irr & (1 << i))) is always true in __kvm_ioapic_update_eoi().
Any ideas?
We meet this several times:
routine (such as note_interrupt() in Linux), this bad irq would be recognized as soon as in the past.
Signed-off-by: Jason Wang jasowang at redhat.com
---
virt/kvm/ioapic.c | 47 +--
virt/kvm/ioapic.h | 2 ++
2 files changed, 47 insertions
On 08/29/2014 12:07 PM, Zhang, Yang Z wrote:
Zhang Haoyu wrote on 2014-08-29:
Hi, Yang, Gleb, Michael,
Could you help review below patch please?
I don't quite understand the background. Why is ioapic->irr set before EOI? It should be the driver's responsibility to clear the interrupt before
On 08/26/2014 05:28 PM, Zhang Haoyu wrote:
Hi, all
I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter an e1000 NIC interrupt storm, because if (!ent->fields.mask && (ioapic->irr & (1 << i))) is always true in __kvm_ioapic_update_eoi().
Any ideas?
We meet this several times: search
On 08/25/2014 03:17 PM, Zhang Haoyu wrote:
Hi, all
I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter an e1000 NIC interrupt storm, because if (!ent->fields.mask && (ioapic->irr & (1 << i))) is always true in __kvm_ioapic_update_eoi().
Any ideas?
We meet this several times: search
On 08/23/2014 06:36 PM, Zhang Haoyu wrote:
Hi, all
I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter an e1000 NIC interrupt storm, because if (!ent->fields.mask && (ioapic->irr & (1 << i))) is always true in __kvm_ioapic_update_eoi().
Any ideas?
We meet this several times: search
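For reference, a simplified sketch of the loop in __kvm_ioapic_update_eoi() that this thread keeps quoting (based on virt/kvm/ioapic.c of that era; locking, trigger-mode handling, and the exact ioapic_service() signature are omitted or vary by kernel version). It shows why an unmasked entry whose irr bit never clears re-fires on every EOI:

	/* On guest EOI, rescan the redirection table ... */
	for (i = 0; i < IOAPIC_NUM_PINS; i++) {
		union kvm_ioapic_redirect_entry *ent = &ioapic->redirtbl[i];

		if (ent->fields.vector != vector)
			continue;

		ent->fields.remote_irr = 0;
		/* ... and if the pin is unmasked and still pending,
		 * inject the interrupt again immediately: the storm. */
		if (!ent->fields.mask && (ioapic->irr & (1 << i)))
			ioapic_service(ioapic, i);
	}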
On 08/22/2014 10:30 AM, Zhang Haoyu wrote:
Hi, Krishna, Shirley
How to get the latest patch of the M:N Implementation of multiqueue? I am going to test the combination of the M:N Implementation of multiqueue and vhost: add polling mode.
Thanks,
Zhang Haoyu
Just FYI. You may refer
On 08/17/2014 06:20 PM, Michael S. Tsirkin wrote:
On Fri, Aug 15, 2014 at 11:40:08AM +0800, Jason Wang wrote:
After rx vq was enabled, we never stop polling its socket. This is suboptimal and may lead to unnecessary wake-ups after the rx net work has already been queued. This could
On 08/17/2014 06:22 PM, Michael S. Tsirkin wrote:
On Fri, Aug 15, 2014 at 10:55:32AM +0800, Jason Wang wrote:
I wonder if k->set_guest_notifiers should be called after hdev->started = true; in vhost_dev_start.
Michael, can we just remove those assertions? Since you may want to set
guest
On 08/07/2014 08:47 PM, Zhangjie (HZ) wrote:
On 2014/8/5 20:14, Zhangjie (HZ) wrote:
On 2014/8/5 17:49, Michael S. Tsirkin wrote:
On Tue, Aug 05, 2014 at 02:29:28PM +0800, Zhangjie (HZ) wrote:
Jason is right, the new order is not the cause of the network being unreachable. Changing the order seems not
On 08/14/2014 06:02 PM, Michael S. Tsirkin wrote:
On Thu, Aug 14, 2014 at 04:52:40PM +0800, Jason Wang wrote:
On 08/07/2014 08:47 PM, Zhangjie (HZ) wrote:
On 2014/8/5 20:14, Zhangjie (HZ) wrote:
On 2014/8/5 17:49, Michael S. Tsirkin wrote:
On Tue, Aug 05, 2014 at 02:29:28PM +0800, Zhangjie
/normalized thru/
256/1/    +1.9004%   -4.7985%   +7.0366%
256/25/   -4.7366%   -11.0809%  +7.1349%
256/50/   +3.9808%   -5.2037%   +9.6887%
4096/1/   +2.1619%   -0.7303%   +2.9134%
4096/25/  -13.1836%  -14.7298%  +1.8134%
4096/50/  -11.1990%  -15.4763%  +5.0605%
Signed-off-by: Jason Wang jasow...@redhat.com
On 07/23/2014 04:12 PM, Razya Ladelsky wrote:
Jason Wang jasow...@redhat.com wrote on 23/07/2014 08:26:36 AM:
From: Jason Wang jasow...@redhat.com
To: Razya Ladelsky/Haifa/IBM@IBMIL, kvm@vger.kernel.org, Michael S.
Tsirkin m...@redhat.com,
Cc: abel.gor...@gmail.com, Joel Nider/Haifa/IBM
On 07/23/2014 04:48 PM, Abel Gordon wrote:
On Wed, Jul 23, 2014 at 11:42 AM, Jason Wang jasow...@redhat.com wrote:
On 07/23/2014 04:12 PM, Razya Ladelsky wrote:
Jason Wang jasow...@redhat.com wrote on 23/07/2014 08:26:36 AM:
From: Jason Wang jasow...@redhat.com
To: Razya Ladelsky
On 07/21/2014 09:23 PM, Razya Ladelsky wrote:
Hello All,
When vhost is waiting for buffers from the guest driver (e.g., more
packets
to send in vhost-net's transmit queue), it normally goes to sleep and
waits
for the guest to kick it. This kick involves a PIO in the guest, and
therefore
On Thu, 2014-04-10 at 17:27 +0800, Fam Zheng wrote:
On Fri, 03/21 17:41, Jason Wang wrote:
This patch adds a simple python script to display vhost statistics; the code was based on the kvm_stat script from qemu. As the work functions have been recorded, filters could be used to distinguish which
On Tue, 2014-04-08 at 16:49 -0400, Simon Chen wrote:
A little update on this..
I turned on multiqueue of vhost-net. Now the receiving VM is getting
traffic over all four queues - based on the CPU usage of the four
vhost-[pid] threads. For some reason, the sender is now pegging 100%
on one
) 707 0
vhost_work_queue_wakeup(rx_kick) 9 0
Signed-off-by: Jason Wang jasow...@redhat.com
---
tools/virtio/vhost_stat | 375
1 file changed, 375 insertions(+)
create mode 100755 tools/virtio/vhost_stat
diff
To help with performance analysis and debugging, this patch introduces tracepoints for vhost_net. Two tracepoints were introduced: packet sending and receiving.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c | 5 +
drivers/vhost/net_trace.h | 53
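The header itself is not shown in this excerpt; as a rough illustration, a vhost_net tracepoint could be declared as below. The event name, fields, and file layout are assumptions, not the actual net_trace.h contents:

/* Hypothetical sketch of a trace header, not the actual patch. */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM vhost_net

#if !defined(_TRACE_VHOST_NET_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_VHOST_NET_H

#include <linux/tracepoint.h>

TRACE_EVENT(vhost_net_tx,
	TP_PROTO(unsigned int qnum, unsigned int len),
	TP_ARGS(qnum, len),
	TP_STRUCT__entry(
		__field(unsigned int, qnum)
		__field(unsigned int, len)
	),
	TP_fast_assign(
		__entry->qnum = qnum;
		__entry->len = len;
	),
	TP_printk("queue %u len %u", __entry->qnum, __entry->len)
);

#endif /* _TRACE_VHOST_NET_H */

#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#define TRACE_INCLUDE_FILE net_trace
/* This part must be outside protection */
#include <trace/define_trace.h>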
To help with performance optimization and debugging, this patch adds tracepoints for vhost. Two kinds of activities were traced: virtio and vhost work queuing/wakeup.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c | 1 +
drivers/vhost/trace.h | 175
/478
Jason Wang (4):
vhost: introduce queue_index for tracing
vhost: basic tracepoints
vhost_net: add basic tracepoints for vhost_net
tools: virtio: add a top-like utility for displaying vhost statistics
drivers/vhost/net.c | 7 +
drivers/vhost/net_trace.h | 53 +++
drivers
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c | 1 +
drivers/vhost/vhost.h | 3 +++
2 files changed, 4 insertions(+)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index a0fa5de..85d666c 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -708,6 +708,7
On 03/10/2014 04:03 PM, Michael S. Tsirkin wrote:
On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote:
We used to stop the handling of tx when the number of pending DMAs
exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
of both host and guest. But it was too
On 03/08/2014 05:39 AM, David Miller wrote:
From: Jason Wang jasow...@redhat.com
Date: Fri, 7 Mar 2014 13:28:27 +0800
This is because the delay added by htb may delay the completion of DMAs and cause the pending DMAs for tap0 to exceed the limit (VHOST_MAX_PEND). In this case vhost stops
when unlimited sndbuf. We still need a
solution for limited sndbuf.
Cc: Michael S. Tsirkin m...@redhat.com
Cc: Qin Chuanyu qinchua...@huawei.com
Signed-off-by: Jason Wang jasow...@redhat.com
---
Changes from V1:
- Remove VHOST_MAX_PEND and switch to use half of the vq size as the limit (see the sketch below)
- Add cpu
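A minimal sketch of the half-of-vq-size limit the first V2 item above describes; the upend_idx/done_idx bookkeeping follows vhost_net, while the helper name and exact expression are assumptions:

/* Sketch: stop tx handling once in-flight zerocopy DMAs exceed half
 * the virtqueue size, instead of the fixed VHOST_MAX_PEND. The
 * helper is hypothetical; field names follow vhost_net. */
static bool tx_pending_exceeds_limit(struct vhost_net_virtqueue *nvq,
				     struct vhost_virtqueue *vq)
{
	unsigned int pend = (nvq->upend_idx - nvq->done_idx + UIO_MAXIOV)
			    % UIO_MAXIOV;

	return pend >= vq->num / 2;
}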
On 02/26/2014 07:16 PM, Michael S. Tsirkin wrote:
Please see MAINTAINERS and copy all relevant lists.
On Wed, Feb 26, 2014 at 05:20:09PM +0800, Qin Chuanyu wrote:
guest kick host base on avail_ring flags value and get perfermance
typo
improved, vhost_zerocopy_callback could do the same
On 02/26/2014 05:23 PM, Michael S. Tsirkin wrote:
On Wed, Feb 26, 2014 at 03:11:21PM +0800, Jason Wang wrote:
On 02/26/2014 02:32 PM, Qin Chuanyu wrote:
On 2014/2/26 13:53, Jason Wang wrote:
On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote:
On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason
On 02/25/2014 03:53 PM, Qin Chuanyu wrote:
On 2014/2/25 15:38, Jason Wang wrote:
On 02/25/2014 02:55 PM, Qin Chuanyu wrote:
guest kicks vhost based on vring flag status and gets performance improved; vhost_zerocopy_callback could do this in the same way, as virtqueue_enable_cb needs one more
On 02/25/2014 04:56 PM, Qin Chuanyu wrote:
On 2014/2/25 16:13, Jason Wang wrote:
On 02/25/2014 03:53 PM, Qin Chuanyu wrote:
On 2014/2/25 15:38, Jason Wang wrote:
On 02/25/2014 02:55 PM, Qin Chuanyu wrote:
guest kicks vhost based on vring flag status and gets performance improved
On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote:
On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote:
We used to stop the handling of tx when the number of pending DMAs
exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
of both host and guest. But it was too aggressive
- Original Message -
guest kicks vhost based on vring flag status and gets performance improved; vhost_zerocopy_callback could do this in the same way, as virtqueue_enable_cb needs one more check after changing the status of avail_ring flags, and vhost also does the same thing after
On 02/26/2014 02:32 PM, Qin Chuanyu wrote:
On 2014/2/26 13:53, Jason Wang wrote:
On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote:
On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote:
We used to stop the handling of tx when the number of pending DMAs
exceeds VHOST_MAX_PEND
On 02/24/2014 09:12 PM, Qin Chuanyu wrote:
with vhost tx zero_copy, the guest nic might hang when the host keeps an skb delivered by the guest in a socket queue; the case has been solved in tun, and it is also needed by bridge. This could easily happen when a LAST_ACK state tcp connection occurs between guest
: Michael S. Tsirkin m...@redhat.com
Cc: Qin Chuanyu qinchua...@huawei.com
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c | 17 +++--
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index a0fa5de..3e96e47 100644
On 02/25/2014 02:55 PM, Qin Chuanyu wrote:
guest kicks vhost based on vring flag status and gets performance improved; vhost_zerocopy_callback could do this in the same way, as virtqueue_enable_cb needs one more check after changing the status of avail_ring flags, and vhost also does the same thing after
*/
+ synchronize_rcu_bh();
/* We do an extra flush before freeing memory,
* since jobs can re-queue themselves. */
vhost_net_flush(n);
Acked-by: Jason Wang jasow...@redhat.com
);
mutex_lock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex);
n->tx_flush = false;
- kref_init(&n->vqs[VHOST_NET_VQ_TX].ubufs->kref);
+ atomic_set(&n->vqs[VHOST_NET_VQ_TX].ubufs->refcount, 1);
mutex_unlock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex);
}
}
Acked-by: Jason
On 02/12/2014 03:38 PM, Qin Chuanyu wrote:
On 2013/8/30 12:29, Jason Wang wrote:
We used to poll the vhost queue before marking DMA as done; this is racy if the vhost thread is woken up before DMA is marked done, which can result in the signal being missed. Fix this by always polling the vhost thread
On 02/11/2014 10:25 PM, Qin Chuanyu wrote:
we could xmit directly instead of going through softirq, to gain improved throughput and latency.
test model: VM->Host. Host just does transmit, with the vhost thread and nic interrupt bound to cpu1. netperf does the throughput test and qperf does the latency test.
Host
On 02/12/2014 01:47 PM, Eric Dumazet wrote:
On Wed, 2014-02-12 at 13:28 +0800, Jason Wang wrote:
A question: without NAPI weight, could this starve other net devices?
Not really, as net devices are serviced by softirq handler.
Yes, then the issue is tun could be starved by other net devices
)
Cc: Michael S. Tsirkin m...@redhat.com
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 9a68409..06268a0 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost
On 02/12/2014 02:26 PM, Eric Dumazet wrote:
On Wed, 2014-02-12 at 13:50 +0800, Jason Wang wrote:
On 02/12/2014 01:47 PM, Eric Dumazet wrote:
On Wed, 2014-02-12 at 13:28 +0800, Jason Wang wrote:
A question: without NAPI weight, could this starve other net devices?
Not really, as net devices
On 02/12/2014 02:46 PM, Qin Chuanyu wrote:
On 2014/2/12 13:28, Jason Wang wrote:
A question: without NAPI weight, could this starve other net devices?
tap xmits skb in thread context; the poll func of the physical nic driver could be called in softirq context without change.
I had tested
On 01/22/2014 11:22 PM, Stefan Hajnoczi wrote:
On Tue, Jan 21, 2014 at 04:06:05PM -0200, Alejandro Comisario wrote:
CCed Michael Tsirkin and Jason Wang who work on KVM networking.
Hi guys, we had in the past when using physical servers, several
throughput issues regarding the throughput
:
On Tue, Jan 21, 2014 at 04:06:05PM -0200, Alejandro Comisario wrote:
CCed Michael Tsirkin and Jason Wang who work on KVM networking.
Hi guys, we had in the past when using physical servers, several
throughput issues regarding the throughput of our APIS, in our case we
measure this with packets
On 12/17/2013 11:09 AM, Rusty Russell wrote:
Jason Wang jasow...@redhat.com writes:
On 10/28/2013 04:01 PM, Asias He wrote:
vqs are freed in virtscsi_freeze but the hotcpu_notifier is not unregistered. We will have a use-after-free when the notifier callback is called after
affinity when doing cpu hotplug)
Cc: sta...@vger.kernel.org
Signed-off-by: Asias He asias.he...@gmail.com
Reviewed-by: Paolo Bonzini pbonz...@redhat.com
Signed-off-by: Jason Wang jasow...@redhat.com
---
Changes from V1:
- Add Fixes line
- CC stable
---
drivers/scsi/virtio_scsi.c | 15 ++-
1
On 10/28/2013 04:01 PM, Asias He wrote:
vqs are freed in virtscsi_freeze but the hotcpu_notifier is not unregistered. We will have a use-after-free when the notifier callback is called after virtscsi_freeze.
Signed-off-by: Asias He as...@redhat.com
---
drivers/scsi/virtio_scsi.c | 15
On 11/24/2013 05:22 PM, Razya Ladelsky wrote:
Hi all,
I am Razya Ladelsky, I work in the IBM Haifa virtualization team, which developed Elvis, presented by Abel Gordon at the last KVM forum:
ELVIS video: https://www.youtube.com/watch?v=9EyweibHfEs
ELVIS slides:
On 11/03/2013 04:07 PM, wangsitan wrote:
Hi all,
A virtual net interface using virtio_net with TSO on may send big TCP packets
(up to 64KB). The receiver will get big packets if it's virtio_net, too. But
it will get common packets (according to MTU) if the receiver is e1000 (who
On 11/04/2013 12:35 PM, Jason Wang wrote:
On 11/03/2013 04:07 PM, wangsitan wrote:
Hi all,
A virtual net interface using virtio_net with TSO on may send big TCP
packets (up to 64KB). The receiver will get big packets if it's virtio_net,
too. But it will get common packets (according to MTU
;
+
+ err = register_hotcpu_notifier(&vscsi->nb);
+ if (err)
+ vdev->config->del_vqs(vdev);
- return virtscsi_init(vdev, vscsi);
+ return err;
}
#endif
Acked-by: Jason Wang jasow...@redhat.com
On 10/20/2013 04:04 PM, Sahid Ferdjaoui wrote:
Hi all,
I'm working on create a large number of tcp connections on a guest;
The environment is on OpenStack:
Host (dedicated compute node):
OS/Kernel: Ubuntu/3.2
Cpus: 24
Mems: 128GB
Guest (alone on the Host):
OS/Kernel: Ubuntu/3.2
On 09/26/2013 12:30 PM, Jason Wang wrote:
On 09/23/2013 03:16 PM, Michael S. Tsirkin wrote:
On Thu, Sep 05, 2013 at 10:54:44AM +0800, Jason Wang wrote:
On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote:
On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote:
Currently, even
On 09/23/2013 03:16 PM, Michael S. Tsirkin wrote:
On Thu, Sep 05, 2013 at 10:54:44AM +0800, Jason Wang wrote:
On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote:
On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote:
Currently, even if the packet length is smaller than
On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote:
On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote:
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if
upend_idx != done_idx we still set zcopy_used to true and roll back this choice later. This could
On 09/04/2013 07:59 PM, Daniel Borkmann wrote:
On 09/04/2013 01:27 PM, Eric Dumazet wrote:
On Wed, 2013-09-04 at 03:30 -0700, Eric Dumazet wrote:
On Wed, 2013-09-04 at 14:30 +0800, Jason Wang wrote:
And tcpdump would certainly help ;)
See attachment.
Nothing obvious on tcpdump (only
On 09/02/2013 01:50 PM, Michael S. Tsirkin wrote:
On Fri, Aug 30, 2013 at 12:29:18PM +0800, Jason Wang wrote:
We tend to batch the used adding and signaling in vhost_zerocopy_callback(), which may result in more than 100 used buffers being updated in vhost_zerocopy_signal_used() in some cases
On 09/02/2013 01:51 PM, Michael S. Tsirkin wrote:
tweak subj s/returns/return/
On Fri, Aug 30, 2013 at 12:29:17PM +0800, Jason Wang wrote:
None of its caller use its return value, so let it return void.
Signed-off-by: Jason Wang jasow...@redhat.com
---
Will correct it in v3
On 09/02/2013 01:56 PM, Michael S. Tsirkin wrote:
On Fri, Aug 30, 2013 at 12:29:22PM +0800, Jason Wang wrote:
As Michael pointed out, we used to limit the max pending DMAs to get better cache utilization. But it was not done correctly since it was only done when there are no new buffers
On 09/02/2013 02:30 PM, Jason Wang wrote:
On 09/02/2013 01:56 PM, Michael S. Tsirkin wrote:
On Fri, Aug 30, 2013 at 12:29:22PM +0800, Jason Wang wrote:
As Michael pointed out, we used to limit the max pending DMAs to get better cache utilization. But it was not done correctly since
We used to poll the vhost queue before marking DMA as done; this is racy if the vhost thread is woken up before DMA is marked done, which can result in the signal being missed. Fix this by always polling the vhost thread after DMA is marked done.
Signed-off-by: Jason Wang jasow...@redhat.com
---
- The patch
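In code terms the fix is an ordering constraint in the DMA-done callback. A sketch, assuming the callback has roughly the shape of vhost_net's vhost_zerocopy_callback() (refcounting and error accounting omitted):

/* Sketch of the ordering fix; simplified from vhost_net. */
static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
{
	struct vhost_net_ubuf_ref *ubufs = ubuf->ctx;
	struct vhost_virtqueue *vq = ubufs->vq;

	/* First mark this descriptor's DMA as done ... */
	vq->heads[ubuf->desc].len = success ?
		VHOST_DMA_DONE_LEN : VHOST_DMA_FAILED_LEN;

	/* ... then wake the vhost thread, so a thread woken by this
	 * poll always observes the completed DMA and can signal the
	 * guest. */
	vhost_poll_queue(&vq->poll);
}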
Let vhost_add_used() use vhost_add_used_n() to reduce code duplication. To avoid the overhead brought by __copy_to_user(), we will use put_user() when only one used element needs to be added.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/vhost.c | 54
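The resulting shape, as a sketch (the wrapper mirrors the description above; the single-element put_user() fast path inside __vhost_add_used_n() is paraphrased, not quoted):

/* vhost_add_used() becomes a thin wrapper around the n-element path. */
int vhost_add_used(struct vhost_virtqueue *vq, unsigned int head, int len)
{
	struct vring_used_elem heads = {
		.id = head,
		.len = len,
	};

	return vhost_add_used_n(vq, &heads, 1);
}

	/* Inside __vhost_add_used_n(), sketch of the fast path that
	 * avoids __copy_to_user() when only one used is added: */
	if (count == 1) {
		if (put_user(heads[0].id, &used->id) ||
		    put_user(heads[0].len, &used->len))
			return -EFAULT;
	} else if (__copy_to_user(used, heads, count * sizeof(*heads))) {
		return -EFAULT;
	}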
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if upend_idx != done_idx we still set zcopy_used to true and roll back this choice later. This could be avoided by determining zerocopy once, by checking all conditions at one time beforehand.
Signed-off-by: Jason Wang jasow
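A sketch of the single up-front test this describes; field names follow vhost_net, and the exact condition in the patch may differ:

	/* Decide zerocopy once, before queueing the packet, instead of
	 * picking it and rolling the choice back later. */
	zcopy_used = zcopy && len >= VHOST_GOODCOPY_LEN &&
		     (nvq->upend_idx + 1) % UIO_MAXIOV != nvq->done_idx &&
		     vhost_net_tx_select_zcopy(net);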
into main loop. Tests show about 5%-10% improvement in per-cpu throughput for guest tx.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c | 18 +++---
1 files changed, 7 insertions(+), 11 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index
far fewer used index updates and memory barriers.
A 2% performance improvement was seen on the netperf TCP_RR test.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c | 13 -
1 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/vhost/net.c b
None of its callers use its return value, so let it return void.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c |5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 969a859..280ee66 100644
--- a/drivers
based on Michael's suggestion.
Jason Wang (6):
vhost_net: make vhost_zerocopy_signal_used() return void
vhost_net: use vhost_add_used_and_signal_n() in
vhost_zerocopy_signal_used()
vhost: switch to use vhost_add_used_n()
vhost_net: determine whether or not to use zerocopy at one time
On 08/31/2013 12:44 AM, Ben Hutchings wrote:
On Fri, 2013-08-30 at 12:29 +0800, Jason Wang wrote:
We used to poll the vhost queue before marking DMA as done; this is racy if the vhost thread is woken up before DMA is marked done, which can result in the signal being missed. Fix this by always polling
On 08/31/2013 02:35 AM, Sergei Shtylyov wrote:
Hello.
On 08/30/2013 08:29 AM, Jason Wang wrote:
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if upend_idx != done_idx we still set zcopy_used to true and roll back this choice later. This could be avoided
On 08/31/2013 12:45 PM, Qin Chuanyu wrote:
On 2013/8/30 0:08, Anthony Liguori wrote:
Hi Qin,
By changing the memory copy and notify mechanism, currently virtio-net with vhost_net could run on Xen with good performance.
I think the key to doing this would be to implement a proper ioeventfd
On 08/25/2013 07:53 PM, Michael S. Tsirkin wrote:
On Fri, Aug 23, 2013 at 04:55:49PM +0800, Jason Wang wrote:
On 08/20/2013 10:48 AM, Jason Wang wrote:
On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote:
On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote:
We used to limit the max
into main loop. Tests show about 5%-10% improvement in per-cpu throughput for guest tx, but a 5% drop in per-cpu transaction rate for a single session TCP_RR.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c | 15 ---
1 files changed, 4 insertions(+), 11 deletions
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if upend_idx != done_idx we still set zcopy_used to true and roll back this choice later. This could be avoided by determining zerocopy once, by checking all conditions at one time beforehand.
Signed-off-by: Jason Wang jasow
far fewer used index updates and memory barriers.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c | 13 -
1 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 280ee66..8a6dd0d 100644
Let vhost_add_used() use vhost_add_used_n() to reduce code duplication.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/vhost.c | 54 ++--
1 files changed, 12 insertions(+), 42 deletions(-)
diff --git a/drivers/vhost/vhost.c b
We used to poll the vhost queue before marking DMA as done; this is racy if the vhost thread is woken up before DMA is marked done, which can result in the signal being missed. Fix this by always polling the vhost thread after DMA is marked done.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost
None of its callers use its return value, so let it return void.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c |5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 969a859..280ee66 100644
--- a/drivers
!= done_idx
to (upend_idx + 1) % UIO_MAXIOV == done_idx.
- Switch to use put_user() in __vhost_add_used_n() if there's only one used
- Keep the max pending check based on Michael's suggestion.
Jason Wang (6):
vhost_net: make vhost_zerocopy_signal_used() return void
vhost_net: use
On 08/20/2013 10:33 AM, Jason Wang wrote:
On 08/16/2013 05:54 PM, Michael S. Tsirkin wrote:
On Fri, Aug 16, 2013 at 01:16:26PM +0800, Jason Wang wrote:
Switch to use vhost_add_used_and_signal_n() to avoid multiple calls to
vhost_add_used_and_signal(). With the patch we will call at most 2
On 08/20/2013 10:48 AM, Jason Wang wrote:
On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote:
On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote:
We used to limit the max pending DMAs to prevent guest from pinning too
many
pages. But this could be removed since:
- We have
On 08/16/2013 05:54 PM, Michael S. Tsirkin wrote:
On Fri, Aug 16, 2013 at 01:16:26PM +0800, Jason Wang wrote:
Switch to use vhost_add_used_and_signal_n() to avoid multiple calls to vhost_add_used_and_signal(). With the patch we will call at most 2 times (considering done_idx wrap around
On 08/16/2013 05:56 PM, Michael S. Tsirkin wrote:
On Fri, Aug 16, 2013 at 01:16:27PM +0800, Jason Wang wrote:
Let vhost_add_used() use vhost_add_used_n() to reduce code duplication.
Signed-off-by: Jason Wang jasow...@redhat.com
Does compiler inline it then?
Reason I ask, last
On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote:
On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote:
We used to limit the max pending DMAs to prevent guest from pinning too many
pages. But this could be removed since:
- We have the sk_wmem_alloc check in both tun/macvtap to do
None of its callers use its return value, so let it return void.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c |5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 969a859..280ee66 100644
--- a/drivers
Hi all:
This series tries to unify and simplify the vhost code, especially for zerocopy.
Please review.
Thanks
Jason Wang (6):
vhost_net: make vhost_zerocopy_signal_used() return void
vhost_net: use vhost_add_used_and_signal_n() in
vhost_zerocopy_signal_used()
vhost: switch to use
from guest. Guest can easily exceed the limitation.
- We've already checked upend_idx != done_idx and switched to non-zerocopy then. So even if all vq->heads were used, we can still do the packet transmission.
So remove this check completely.
Signed-off-by: Jason Wang jasow...@redhat.com
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if upend_idx != done_idx we still set zcopy_used to true and roll back this choice later. This could be avoided by determining zerocopy once, by checking all conditions at one time beforehand.
Signed-off-by: Jason Wang jasow
Switch to use vhost_add_used_and_signal_n() to avoid multiple calls to vhost_add_used_and_signal(). With the patch we will call at most 2 times (considering done_idx wrap around) compared to N times w/o this patch.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c | 13
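A sketch of the two-chunk flush described above, closely following vhost_net's vhost_zerocopy_signal_used() (tx error accounting simplified away):

/* Collect the contiguous run of completed DMAs starting at done_idx,
 * then flush it with at most two calls to
 * vhost_add_used_and_signal_n(), splitting only at the UIO_MAXIOV
 * wrap-around point. */
static void vhost_zerocopy_signal_used(struct vhost_net *net,
				       struct vhost_virtqueue *vq)
{
	struct vhost_net_virtqueue *nvq =
		container_of(vq, struct vhost_net_virtqueue, vq);
	int i, add, j = 0;

	for (i = nvq->done_idx; i != nvq->upend_idx;
	     i = (i + 1) % UIO_MAXIOV) {
		if (!VHOST_DMA_IS_DONE(vq->heads[i].len))
			break;
		vq->heads[i].len = VHOST_DMA_CLEAR_LEN;
		++j;
	}
	while (j) {
		add = min(UIO_MAXIOV - nvq->done_idx, j);
		vhost_add_used_and_signal_n(vq->dev, vq,
					    &vq->heads[nvq->done_idx], add);
		nvq->done_idx = (nvq->done_idx + add) % UIO_MAXIOV;
		j -= add;
	}
}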
We used to poll the vhost queue before marking DMA as done; this is racy if the vhost thread is woken up before DMA is marked done, which can result in the signal being missed. Fix this by always polling the vhost thread after DMA is marked done.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost
Let vhost_add_used() use vhost_add_used_n() to reduce code duplication.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/vhost.c | 43 ++-
1 files changed, 2 insertions(+), 41 deletions(-)
diff --git a/drivers/vhost/vhost.c b
On 07/25/2013 04:54 PM, Jason Wang wrote:
We try to handle the hypervisor compatibility mode by detecting hypervisors in a specific order. This is not robust, since hypervisors may implement each other's features.
This patch tries to handle this situation by always choosing the last one