[PATCH net-next RFC 1/3] virtio: support for urgent descriptors

2014-10-11 Thread Jason Wang
feature bit. Signed-off-by: Michael S. Tsirkin m...@redhat.com Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/virtio/virtio_ring.c | 75 +--- include/linux/virtio.h | 14 include/uapi/linux/virtio_ring.h | 5 ++- 3 files changed

Re: [PATCH RFC 2/2] vhost: support urgent descriptors

2014-09-22 Thread Jason Wang
On 09/22/2014 02:55 PM, Michael S. Tsirkin wrote: On Mon, Sep 22, 2014 at 11:30:23AM +0800, Jason Wang wrote: On 09/20/2014 06:00 PM, Paolo Bonzini wrote: On 19/09/2014 09:10, Jason Wang wrote: -if (!vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX)) { +if (vq->urgent

Re: [PATCH RFC 2/2] vhost: support urgent descriptors

2014-09-21 Thread Jason Wang
On 09/20/2014 06:00 PM, Paolo Bonzini wrote: On 19/09/2014 09:10, Jason Wang wrote: - if (!vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX)) { + if (vq->urgent || !vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX)) { So the urgent descriptor only works when event index was not enabled
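
For readers following the quoted hunk, here is a rough sketch of the signalling decision it touches, reconstructed from this RFC discussion; the helpers and fields (vhost_has_feature(), last_used_idx, signalled_used, vring_need_event()) follow the vhost/virtio style of that era, but the exact patch differs and error handling is omitted:

/* Sketch: should vhost interrupt the guest for this completion?
 * With the RFC, an urgent descriptor bypasses the event-index logic
 * and falls back to the plain VRING_AVAIL_F_NO_INTERRUPT flag check,
 * which is what the question above is probing. */
static bool vhost_should_signal_sketch(struct vhost_virtqueue *vq,
				       bool urgent, __u16 avail_flags,
				       __u16 used_event)
{
	if (urgent || !vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX))
		return !(avail_flags & VRING_AVAIL_F_NO_INTERRUPT);
	/* Event-index path: signal only if the guest's used_event value
	 * falls inside the window of entries used since the last signal. */
	return vring_need_event(used_event, vq->last_used_idx,
				vq->signalled_used);
}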

Re: [PATCH RFC 2/2] vhost: support urgent descriptors

2014-09-19 Thread Jason Wang
On 07/01/2014 06:49 PM, Michael S. Tsirkin wrote: Signed-off-by: Michael S. Tsirkin m...@redhat.com --- drivers/vhost/vhost.h | 19 +-- drivers/vhost/net.c | 30 +- drivers/vhost/scsi.c | 23 +++ drivers/vhost/test.c | 5

Re: [PATCH RFC 2/2] vhost: support urgent descriptors

2014-09-19 Thread Jason Wang
On 07/01/2014 06:49 PM, Michael S. Tsirkin wrote: Signed-off-by: Michael S. Tsirkin m...@redhat.com --- drivers/vhost/vhost.h | 19 +-- drivers/vhost/net.c | 30 +- drivers/vhost/scsi.c | 23 +++ drivers/vhost/test.c | 5

Re: [Qemu-devel] [question] e1000 interrupt storm happened because of its corresponding ioapic->irr bit always set

2014-09-03 Thread Jason Wang
and for guest who has a bad irq detection routine ( such as note_interrupt() in linux ), this bad irq would be recognized soon as in the past. Signed-off-by: Jason Wang jasowang at redhat.com --- virt/kvm/ioapic.c | 47 +-- virt

Re: [Qemu-devel] [question] e1000 interrupt storm happened because of its corresponding ioapic->irr bit always set

2014-08-28 Thread Jason Wang
On 08/27/2014 05:31 PM, Zhang Haoyu wrote: Hi, all I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter an e1000 NIC interrupt storm, because if (!ent->fields.mask && (ioapic->irr & (1 << i))) is always true in __kvm_ioapic_update_eoi(). Any ideas? We meet this several times:
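
For context, a simplified sketch of the EOI path being discussed; it paraphrases the KVM ioapic code of that era, and the locking, remote_irr handling and the ioapic_service() signature are assumptions rather than the exact upstream source:

/* Sketch: why a stuck ioapic->irr bit becomes an interrupt storm.
 * On every EOI for a level-triggered vector KVM rescans the pins; if a
 * pin is unmasked and its irr bit is still set (the emulated e1000
 * never deasserted the line), the interrupt is injected again right
 * away, so the guest keeps taking and EOI-ing the same interrupt. */
static void ioapic_update_eoi_sketch(struct kvm_ioapic *ioapic, int vector)
{
	int i;

	for (i = 0; i < IOAPIC_NUM_PINS; i++) {
		union kvm_ioapic_redirect_entry *ent = &ioapic->redirtbl[i];

		if (ent->fields.vector != vector ||
		    ent->fields.trig_mode != IOAPIC_LEVEL_TRIG)
			continue;

		ent->fields.remote_irr = 0;
		/* The condition quoted in this thread: always true here
		 * because irr is never cleared for the stuck pin. */
		if (!ent->fields.mask && (ioapic->irr & (1 << i)))
			ioapic_service(ioapic, i);	/* re-inject */
	}
}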

Re: [Qemu-devel] [question] e1000 interrupt storm happened because of its corresponding ioapic->irr bit always set

2014-08-28 Thread Jason Wang
routine ( such as note_interrupt() in linux ), this bad irq would be recognized soon as in the past. Signed-off-by: Jason Wang jasowang at redhat.com --- virt/kvm/ioapic.c | 47 +-- virt/kvm/ioapic.h |2 ++ 2 files changed, 47 insertions

Re: [Qemu-devel] [question] e1000 interrupt storm happened because of its corresponding ioapic->irr bit always set

2014-08-28 Thread Jason Wang
On 08/29/2014 12:07 PM, Zhang, Yang Z wrote: Zhang Haoyu wrote on 2014-08-29: Hi, Yang, Gleb, Michael, Could you help review the patch below, please? I don't quite understand the background. Why is ioapic->irr set before EOI? It should be the driver's responsibility to clear the interrupt before

Re: [Qemu-devel] [question] e1000 interrupt storm happened because of its corresponding ioapic->irr bit always set

2014-08-26 Thread Jason Wang
On 08/26/2014 05:28 PM, Zhang Haoyu wrote: Hi, all I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter an e1000 NIC interrupt storm, because if (!ent->fields.mask && (ioapic->irr & (1 << i))) is always true in __kvm_ioapic_update_eoi(). Any ideas? We meet this several times: search

Re: [question] e1000 interrupt storm happened because of its corresponding ioapic->irr bit always set

2014-08-25 Thread Jason Wang
On 08/25/2014 03:17 PM, Zhang Haoyu wrote: Hi, all I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter an e1000 NIC interrupt storm, because if (!ent->fields.mask && (ioapic->irr & (1 << i))) is always true in __kvm_ioapic_update_eoi(). Any ideas? We meet this several times: search

Re: [question] e1000 interrupt storm happened because of its corresponding ioapic->irr bit always set

2014-08-25 Thread Jason Wang
On 08/25/2014 03:17 PM, Zhang Haoyu wrote: Hi, all I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter an e1000 NIC interrupt storm, because if (!ent->fields.mask && (ioapic->irr & (1 << i))) is always true in __kvm_ioapic_update_eoi(). Any ideas? We meet this several times:

Re: [Qemu-devel] [question] e1000 interrupt storm happened because of its corresponding ioapic->irr bit always set

2014-08-24 Thread Jason Wang
On 08/23/2014 06:36 PM, Zhang Haoyu wrote: Hi, all I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter an e1000 NIC interrupt storm, because if (!ent->fields.mask && (ioapic->irr & (1 << i))) is always true in __kvm_ioapic_update_eoi(). Any ideas? We meet this several times: search

Re: [question] one vhost kthread serves multiple tx/rx queues which belong to one virtio-net device

2014-08-22 Thread Jason Wang
On 08/22/2014 10:30 AM, Zhang Haoyu wrote: Hi, Krishna, Shirley How to get the latest patch of the M:N implementation of multiqueue? I am going to test the combination of the M:N implementation of multiqueue and vhost: add polling mode. Thanks, Zhang Haoyu Just FYI. You may refer

Re: [PATCH net-next] vhost_net: stop rx net polling when possible

2014-08-17 Thread Jason Wang
On 08/17/2014 06:20 PM, Michael S. Tsirkin wrote: On Fri, Aug 15, 2014 at 11:40:08AM +0800, Jason Wang wrote: After the rx vq is enabled, we never stop polling its socket. This is suboptimal and may lead to unnecessary wake-ups after the rx net work has already been queued. This could

Re: Query: Is it possible to lose interrupts between vhost and virtio_net during migration?

2014-08-17 Thread Jason Wang
On 08/17/2014 06:22 PM, Michael S. Tsirkin wrote: On Fri, Aug 15, 2014 at 10:55:32AM +0800, Jason Wang wrote: I wonder if k->set_guest_notifiers should be called after hdev->started = true; in vhost_dev_start. Michael, can we just remove those assertions? Since you may want to set guest

Re: Query: Is it possible to lose interrupts between vhost and virtio_net during migration?

2014-08-14 Thread Jason Wang
On 08/07/2014 08:47 PM, Zhangjie (HZ) wrote: On 2014/8/5 20:14, Zhangjie (HZ) wrote: On 2014/8/5 17:49, Michael S. Tsirkin wrote: On Tue, Aug 05, 2014 at 02:29:28PM +0800, Zhangjie (HZ) wrote: Jason is right, the new order is not the cause of network unreachable. Changing order seems not

Re: Query: Is it possible to lose interrupts between vhost and virtio_net during migration?

2014-08-14 Thread Jason Wang
On 08/14/2014 06:02 PM, Michael S. Tsirkin wrote: On Thu, Aug 14, 2014 at 04:52:40PM +0800, Jason Wang wrote: On 08/07/2014 08:47 PM, Zhangjie (HZ) wrote: On 2014/8/5 20:14, Zhangjie (HZ) wrote: On 2014/8/5 17:49, Michael S. Tsirkin wrote: On Tue, Aug 05, 2014 at 02:29:28PM +0800, Zhangjie

[PATCH net-next] vhost_net: stop rx net polling when possible

2014-08-14 Thread Jason Wang
/normalized thru/ 256/1/ +1.9004% -4.7985% +7.0366% 256/25/ -4.7366% -11.0809% +7.1349% 256/50/ +3.9808% -5.2037% +9.6887% 4096/1/ +2.1619% -0.7303% +2.9134% 4096/25/ -13.1836% -14.7298% +1.8134% 4096/50/ -11.1990% -15.4763% +5.0605% Signed-off-by: Jason Wang jasow...@redhat.com

Re: [PATCH] vhost: Add polling mode

2014-07-23 Thread Jason Wang
On 07/23/2014 04:12 PM, Razya Ladelsky wrote: Jason Wang jasow...@redhat.com wrote on 23/07/2014 08:26:36 AM: From: Jason Wang jasow...@redhat.com To: Razya Ladelsky/Haifa/IBM@IBMIL, kvm@vger.kernel.org, Michael S. Tsirkin m...@redhat.com, Cc: abel.gor...@gmail.com, Joel Nider/Haifa/IBM

Re: [PATCH] vhost: Add polling mode

2014-07-23 Thread Jason Wang
On 07/23/2014 04:48 PM, Abel Gordon wrote: On Wed, Jul 23, 2014 at 11:42 AM, Jason Wang jasow...@redhat.com wrote: On 07/23/2014 04:12 PM, Razya Ladelsky wrote: Jason Wang jasow...@redhat.com wrote on 23/07/2014 08:26:36 AM: From: Jason Wang jasow...@redhat.com To: Razya Ladelsky

Re: [PATCH] vhost: Add polling mode

2014-07-22 Thread Jason Wang
On 07/21/2014 09:23 PM, Razya Ladelsky wrote: Hello All, When vhost is waiting for buffers from the guest driver (e.g., more packets to send in vhost-net's transmit queue), it normally goes to sleep and waits for the guest to kick it. This kick involves a PIO in the guest, and therefore
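
As a rough illustration of the mode being proposed, here is a minimal sketch of a bounded busy-poll loop in the vhost worker. vhost_disable_notify()/vhost_enable_notify() are real vhost helpers; vhost_vq_avail_empty() only appeared in later kernels, and the loop itself is an assumption, not the actual patch under review:

/* Sketch: poll the avail ring for a while instead of sleeping until
 * the guest's kick (a PIO write that costs a VM exit). */
static bool vhost_busy_poll_sketch(struct vhost_virtqueue *vq,
				   unsigned long timeout_us)
{
	unsigned long end = jiffies + usecs_to_jiffies(timeout_us);

	vhost_disable_notify(vq->dev, vq);	/* guest need not kick us */
	while (time_before(jiffies, end)) {
		if (!vhost_vq_avail_empty(vq->dev, vq))
			return true;		/* new buffers showed up */
		cpu_relax();
	}
	/* Nothing arrived: re-enable guest notification, then recheck
	 * once to close the race with a buffer added in the meantime. */
	if (unlikely(vhost_enable_notify(vq->dev, vq)))
		return true;
	return false;				/* let the worker sleep */
}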

Re: [PATCH RFC V2 4/4] tools: virtio: add a top-like utility for displaying vhost statistics

2014-04-10 Thread Jason Wang
On Thu, 2014-04-10 at 17:27 +0800, Fam Zheng wrote: On Fri, 03/21 17:41, Jason Wang wrote: This patch adds a simple python script to display vhost statistics; the code was based on the kvm_stat script from qemu. As the work function has been recorded, filters could be used to distinguish which

Re: vhost-[pid] 100% CPU

2014-04-08 Thread Jason Wang
On Tue, 2014-04-08 at 16:49 -0400, Simon Chen wrote: A little update on this.. I turned on multiqueue of vhost-net. Now the receiving VM is getting traffic over all four queues - based on the CPU usage of the four vhost-[pid] threads. For some reason, the sender is now pegging 100% on one

[PATCH RFC V2 4/4] tools: virtio: add a top-like utility for displaying vhost statistics

2014-03-21 Thread Jason Wang
) 707 0 vhost_work_queue_wakeup(rx_kick) 9 0 Signed-off-by: Jason Wang jasow...@redhat.com --- tools/virtio/vhost_stat | 375 1 file changed, 375 insertions(+) create mode 100755 tools/virtio/vhost_stat diff

[PATCH RFC V2 3/4] vhost_net: add basic tracepoints for vhost_net

2014-03-21 Thread Jason Wang
To help performance analysis and debugging, this patch introduces tracepoints for vhost_net. Two tracepoints were introduced, for packet sending and receiving. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/net.c | 5 + drivers/vhost/net_trace.h | 53

[PATCH RFC V2 2/4] vhost: basic tracepoints

2014-03-21 Thread Jason Wang
To help with performance optimization and debugging, this patch adds tracepoints for vhost. Two kinds of activities were traced: virtio and vhost work queuing/wakeup. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/net.c | 1 + drivers/vhost/trace.h | 175

[PATCH RFC V2 0/4] Adding tracepoints to vhost/net

2014-03-21 Thread Jason Wang
/478 Jason Wang (4): vhost: introduce queue_index for tracing vhost: basic tracepoints vhost_net: add basic tracepoints for vhost_net tools: virtio: add a top-like utility for displaying vhost statistics drivers/vhost/net.c | 7 + drivers/vhost/net_trace.h | 53 +++ drivers

[PATCH RFC V2 1/4] vhost: introduce queue_index for tracing

2014-03-21 Thread Jason Wang
Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/net.c | 1 + drivers/vhost/vhost.h | 3 +++ 2 files changed, 4 insertions(+) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index a0fa5de..85d666c 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -708,6 +708,7

Re: [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit

2014-03-13 Thread Jason Wang
On 03/10/2014 04:03 PM, Michael S. Tsirkin wrote: On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote: We used to stop the handling of tx when the number of pending DMAs exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation of both host and guest. But it was too

Re: [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit

2014-03-09 Thread Jason Wang
On 03/08/2014 05:39 AM, David Miller wrote: From: Jason Wang jasow...@redhat.com Date: Fri, 7 Mar 2014 13:28:27 +0800 This is because the delay added by htb may delay the completion of DMAs and cause the pending DMAs for tap0 to exceed the limit (VHOST_MAX_PEND). In this case vhost stops

[PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit

2014-03-06 Thread Jason Wang
when unlimited sndbuf. We still need a solution for limited sndbuf. Cc: Michael S. Tsirkin m...@redhat.com Cc: Qin Chuanyu qinchua...@huawei.com Signed-off-by: Jason Wang jasow...@redhat.com --- Changes from V1: - Remove VHOST_MAX_PEND and switch to use half of the vq size as the limit - Add cpu

Re: [PATCH] vhost: poll vhost_net only when tx notification is enabled

2014-02-27 Thread Jason Wang
On 02/26/2014 07:16 PM, Michael S. Tsirkin wrote: Please see MAINTAINERS and copy all relevant lists. On Wed, Feb 26, 2014 at 05:20:09PM +0800, Qin Chuanyu wrote: guest kicks host based on the avail_ring flags value and gets performance improved, vhost_zerocopy_callback could do the same

Re: [PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit

2014-02-26 Thread Jason Wang
On 02/26/2014 05:23 PM, Michael S. Tsirkin wrote: On Wed, Feb 26, 2014 at 03:11:21PM +0800, Jason Wang wrote: On 02/26/2014 02:32 PM, Qin Chuanyu wrote: On 2014/2/26 13:53, Jason Wang wrote: On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote: On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason

Re: [PATCH] vhost: make vhost_zerocopy_callback more efficient by poll_queue base on vhost status

2014-02-25 Thread Jason Wang
On 02/25/2014 03:53 PM, Qin Chuanyu wrote: On 2014/2/25 15:38, Jason Wang wrote: On 02/25/2014 02:55 PM, Qin Chuanyu wrote: guest kicks vhost based on vring flag status and gets performance improved, vhost_zerocopy_callback could do this in the same way, as virtqueue_enable_cb needs one more

Re: [PATCH] vhost: make vhost_zerocopy_callback more efficient by poll_queue base on vhost status

2014-02-25 Thread Jason Wang
On 02/25/2014 04:56 PM, Qin Chuanyu wrote: On 2014/2/25 16:13, Jason Wang wrote: On 02/25/2014 03:53 PM, Qin Chuanyu wrote: On 2014/2/25 15:38, Jason Wang wrote: On 02/25/2014 02:55 PM, Qin Chuanyu wrote: guest kicks vhost based on vring flag status and gets performance improved

Re: [PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit

2014-02-25 Thread Jason Wang
On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote: On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote: We used to stop the handling of tx when the number of pending DMAs exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation of both host and guest. But it was too aggressive

Re: [PATCH] vhost: make vhost_zerocopy_callback more efficient by poll_queue base on vhost status

2014-02-25 Thread Jason Wang
- Original Message - guest kicks vhost based on vring flag status and gets performance improved, vhost_zerocopy_callback could do this in the same way; as virtqueue_enable_cb needs one more check after changing the status of avail_ring flags, vhost also does the same thing after
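
The "one more check" pattern referred to above, sketched from the guest-side ring code; the field layout follows the pre-virtio-1.0 vring_virtqueue internals and is simplified (no event index, no endian accessors), so treat it as an illustration rather than the exact virtqueue_enable_cb():

/* Sketch: re-enable interrupts/notifications, then recheck.
 * After clearing the "no interrupt" hint the other side may already
 * have added work while the hint was set, so a final check is needed
 * or that work is noticed only much later. The argument in this thread
 * is that vhost should mirror the pattern after clearing its own
 * "no notify" hint before relying on vhost_zerocopy_callback(). */
static bool enable_cb_and_recheck(struct vring_virtqueue *vq)
{
	/* 1. Tell the device we want interrupts again. */
	vq->vring.avail->flags &= ~VRING_AVAIL_F_NO_INTERRUPT;
	/* 2. Order the flag update before reading the used index. */
	virtio_mb(vq->weak_barriers);
	/* 3. The extra check: work may have arrived in the window. */
	if (vq->last_used_idx != vq->vring.used->idx)
		return false;	/* more work pending, keep processing */
	return true;		/* safe to sleep; an interrupt will come */
}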

Re: [PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit

2014-02-25 Thread Jason Wang
On 02/26/2014 02:32 PM, Qin Chuanyu wrote: On 2014/2/26 13:53, Jason Wang wrote: On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote: On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote: We used to stop the handling of tx when the number of pending DMAs exceeds VHOST_MAX_PEND

Re: [PATCH] bridge: orphan frags on local receive

2014-02-24 Thread Jason Wang
On 02/24/2014 09:12 PM, Qin Chuanyu wrote: with vhost tx zero_copy, the guest nic might hang when the host keeps an skb delivered by the guest in a socket queue; the case has been solved in tun, and it is also needed by bridge. This could easily happen when a LAST_ACK state tcp connection occurs between guest

[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit

2014-02-24 Thread Jason Wang
: Michael S. Tsirkin m...@redhat.com Cc: Qin Chuanyu qinchua...@huawei.com Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/net.c | 17 +++-- 1 file changed, 7 insertions(+), 10 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index a0fa5de..3e96e47 100644

Re: [PATCH] vhost: make vhost_zerocopy_callback more efficient by poll_queue base on vhost status

2014-02-24 Thread Jason Wang
On 02/25/2014 02:55 PM, Qin Chuanyu wrote: guest kicks vhost based on vring flag status and gets performance improved, vhost_zerocopy_callback could do this in the same way; as virtqueue_enable_cb needs one more check after changing the status of avail_ring flags, vhost also does the same thing after

Re: [virtio-dev] [PATCH net v2] vhost: fix a theoretical race in device cleanup

2014-02-13 Thread Jason Wang
*/ + synchronize_rcu_bh(); /* We do an extra flush before freeing memory, * since jobs can re-queue themselves. */ vhost_net_flush(n); Acked-by: Jason Wang jasow...@redhat.com

Re: [PATCH net v2] vhost: fix ref cnt checking deadlock

2014-02-13 Thread Jason Wang
); mutex_lock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex); n->tx_flush = false; - kref_init(&n->vqs[VHOST_NET_VQ_TX].ubufs->kref); + atomic_set(&n->vqs[VHOST_NET_VQ_TX].ubufs->refcount, 1); mutex_unlock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex); } } Acked-by: Jason

Re: [PATCH V2 5/6] vhost_net: poll vhost queue after marking DMA is done

2014-02-12 Thread Jason Wang
On 02/12/2014 03:38 PM, Qin Chuanyu wrote: On 2013/8/30 12:29, Jason Wang wrote: We used to poll the vhost queue before marking DMA as done; this is racy, since the vhost thread may be woken up before DMA is marked done, which can result in the signal being missed. Fix this by always polling the vhost thread

Re: [PATCH] tun: use netif_receive_skb instead of netif_rx_ni

2014-02-11 Thread Jason Wang
On 02/11/2014 10:25 PM, Qin Chuanyu wrote: we could xmit directly instead of going through softirq to gain throughput and latency improvements. test model: VM-Host-Host, just doing transmit, with the vhost thread and nic interrupt bound to cpu1. netperf does the throughput test and qperf does the latency test. Host

Re: [PATCH] tun: use netif_receive_skb instead of netif_rx_ni

2014-02-11 Thread Jason Wang
On 02/12/2014 01:47 PM, Eric Dumazet wrote: On Wed, 2014-02-12 at 13:28 +0800, Jason Wang wrote: A question: without NAPI weight, could this starve other net devices? Not really, as net devices are serviced by softirq handler. Yes, then the issue is tun could be starved by other net devices

[PATCH net] vhost_net: do not report a used len larger than receive buffer size

2014-02-11 Thread Jason Wang
) Cc: Michael S. Tsirkin m...@redhat.com Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/net.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 9a68409..06268a0 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost

Re: [PATCH] tun: use netif_receive_skb instead of netif_rx_ni

2014-02-11 Thread Jason Wang
On 02/12/2014 02:26 PM, Eric Dumazet wrote: On Wed, 2014-02-12 at 13:50 +0800, Jason Wang wrote: On 02/12/2014 01:47 PM, Eric Dumazet wrote: On Wed, 2014-02-12 at 13:28 +0800, Jason Wang wrote: A question: without NAPI weight, could this starve other net devices? Not really, as net devices

Re: [PATCH] tun: use netif_receive_skb instead of netif_rx_ni

2014-02-11 Thread Jason Wang
On 02/12/2014 02:46 PM, Qin Chuanyu wrote: On 2014/2/12 13:28, Jason Wang wrote: A question: without NAPI weight, could this starve other net devices? tap xmit skb uses thread context, the poll func of the physical nic driver could be called in softirq context without change. I had tested

Re: kvm virtio ethernet ring on guest side over high throughput (packet per second)

2014-01-22 Thread Jason Wang
On 01/22/2014 11:22 PM, Stefan Hajnoczi wrote: On Tue, Jan 21, 2014 at 04:06:05PM -0200, Alejandro Comisario wrote: CCed Michael Tsirkin and Jason Wang who work on KVM networking. Hi guys, we had in the past when using physical servers, several throughput issues regarding the throughput

Re: kvm virtio ethernet ring on guest side over high throughput (packet per second)

2014-01-22 Thread Jason Wang
: On Tue, Jan 21, 2014 at 04:06:05PM -0200, Alejandro Comisario wrote: CCed Michael Tsirkin and Jason Wang who work on KVM networking. Hi guys, we had in the past when using physical servers, several throughput issues regarding the throughput of our APIS, in our case we measure this with packets

Re: [PATCH] virtio-scsi: Fix hotcpu_notifier use-after-free with virtscsi_freeze

2013-12-16 Thread Jason Wang
On 12/17/2013 11:09 AM, Rusty Russell wrote: Jason Wang jasow...@redhat.com writes: On 10/28/2013 04:01 PM, Asias He wrote: vqs are freed in virtscsi_freeze but the hotcpu_notifier is not unregistered. We will have a use-after-free usage when the notifier callback is called after

[PATCH V2] virtio-scsi: Fix hotcpu_notifier use-after-free with virtscsi_freeze

2013-12-16 Thread Jason Wang
affinity when doing cpu hotplug) Cc: sta...@vger.kernel.org Signed-off-by: Asias He asias.he...@gmail.com Reviewed-by: Paolo Bonzini pbonz...@redhat.com Signed-off-by: Jason Wang jasow...@redhat.com --- Changes from V1: - Add Fixes line - CC stable --- drivers/scsi/virtio_scsi.c | 15 ++- 1

Re: [PATCH] virtio-scsi: Fix hotcpu_notifier use-after-free with virtscsi_freeze

2013-12-11 Thread Jason Wang
On 10/28/2013 04:01 PM, Asias He wrote: vqs are freed in virtscsi_freeze but the hotcpu_notifier is not unregistered. We will have a use-after-free usage when the notifier callback is called after virtscsi_freeze. Signed-off-by: Asias He as...@redhat.com --- drivers/scsi/virtio_scsi.c | 15

Re: Elvis upstreaming plan

2013-11-26 Thread Jason Wang
On 11/24/2013 05:22 PM, Razya Ladelsky wrote: Hi all, I am Razya Ladelsky, I work at IBM Haifa virtualization team, which developed Elvis, presented by Abel Gordon at the last KVM forum: ELVIS video: https://www.youtube.com/watch?v=9EyweibHfEs ELVIS slides:

Re: virtio-net: how to prevent receiving big packages?

2013-11-03 Thread Jason Wang
On 11/03/2013 04:07 PM, wangsitan wrote: Hi all, A virtual net interface using virtio_net with TSO on may send big TCP packets (up to 64KB). The receiver will get big packets if it's virtio_net, too. But it will get common packets (according to MTU) if the receiver is e1000 (who

Re: virtio-net: how to prevent receiving big packages?

2013-11-03 Thread Jason Wang
On 11/04/2013 12:35 PM, Jason Wang wrote: On 11/03/2013 04:07 PM, wangsitan wrote: Hi all, A virtual net interface using virtio_net with TSO on may send big TCP packets (up to 64KB). The receiver will get big packets if it's virtio_net, too. But it will get common packets (according to MTU

Re: [PATCH] virtio-scsi: Fix hotcpu_notifier use-after-free with virtscsi_freeze

2013-10-28 Thread Jason Wang
; + + err = register_hotcpu_notifier(&vscsi->nb); + if (err) + vdev->config->del_vqs(vdev); - return virtscsi_init(vdev, vscsi); + return err; } #endif Acked-by: Jason Wang jasow...@redhat.com

Re: virtio: Large number of tcp connections, vhost_net seems to be a bottleneck

2013-10-23 Thread Jason Wang
On 10/20/2013 04:04 PM, Sahid Ferdjaoui wrote: Hi all, I'm working on creating a large number of tcp connections on a guest; The environment is on OpenStack: Host (dedicated compute node): OS/Kernel: Ubuntu/3.2 Cpus: 24 Mems: 128GB Guest (alone on the Host): OS/Kernel: Ubuntu/3.2

Re: [PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time

2013-09-29 Thread Jason Wang
On 09/26/2013 12:30 PM, Jason Wang wrote: On 09/23/2013 03:16 PM, Michael S. Tsirkin wrote: On Thu, Sep 05, 2013 at 10:54:44AM +0800, Jason Wang wrote: On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote: On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote: Currently, even

Re: [PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time

2013-09-25 Thread Jason Wang
On 09/23/2013 03:16 PM, Michael S. Tsirkin wrote: On Thu, Sep 05, 2013 at 10:54:44AM +0800, Jason Wang wrote: On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote: On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote: Currently, even if the packet length is smaller than

Re: [PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time

2013-09-04 Thread Jason Wang
On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote: On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote: Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if upend_idx != done_idx we still set zcopy_used to true and rollback this choice later. This could

Re: [PATCH v2 net-next] pkt_sched: fq: Fair Queue packet scheduler

2013-09-04 Thread Jason Wang
On 09/04/2013 07:59 PM, Daniel Borkmann wrote: On 09/04/2013 01:27 PM, Eric Dumazet wrote: On Wed, 2013-09-04 at 03:30 -0700, Eric Dumazet wrote: On Wed, 2013-09-04 at 14:30 +0800, Jason Wang wrote: And tcpdump would certainly help ;) See attachment. Nothing obvious on tcpdump (only

Re: [PATCH V2 2/6] vhost_net: use vhost_add_used_and_signal_n() in vhost_zerocopy_signal_used()

2013-09-02 Thread Jason Wang
On 09/02/2013 01:50 PM, Michael S. Tsirkin wrote: On Fri, Aug 30, 2013 at 12:29:18PM +0800, Jason Wang wrote: We tend to batch the used adding and signaling in vhost_zerocopy_callback() which may result in more than 100 used buffers being updated in vhost_zerocopy_signal_used() in some cases

Re: [PATCH V2 1/6] vhost_net: make vhost_zerocopy_signal_used() returns void

2013-09-02 Thread Jason Wang
On 09/02/2013 01:51 PM, Michael S. Tsirkin wrote: tweak subj s/returns/return/ On Fri, Aug 30, 2013 at 12:29:17PM +0800, Jason Wang wrote: None of its caller use its return value, so let it return void. Signed-off-by: Jason Wang jasow...@redhat.com --- Will correct it in v3

Re: [PATCH V2 6/6] vhost_net: correctly limit the max pending buffers

2013-09-02 Thread Jason Wang
On 09/02/2013 01:56 PM, Michael S. Tsirkin wrote: On Fri, Aug 30, 2013 at 12:29:22PM +0800, Jason Wang wrote: As Michael pointed out, we used to limit the max pending DMAs to get better cache utilization. But it was not done correctly since it was only done when there's no new buffers

Re: [PATCH V2 6/6] vhost_net: correctly limit the max pending buffers

2013-09-02 Thread Jason Wang
On 09/02/2013 02:30 PM, Jason Wang wrote: On 09/02/2013 01:56 PM, Michael S. Tsirkin wrote: On Fri, Aug 30, 2013 at 12:29:22PM +0800, Jason Wang wrote: As Michael pointed out, we used to limit the max pending DMAs to get better cache utilization. But it was not done correctly since

[PATCH V3 5/6] vhost_net: poll vhost queue after marking DMA is done

2013-09-02 Thread Jason Wang
We used to poll the vhost queue before marking DMA as done; this is racy, since the vhost thread may be woken up before DMA is marked done, which can result in the signal being missed. Fix this by always polling the vhost thread after DMA is marked done. Signed-off-by: Jason Wang jasow...@redhat.com --- - The patch
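
A minimal sketch of the ordering this patch enforces, with the refcounting and batching of the real vhost_zerocopy_callback() omitted; VHOST_DMA_DONE_LEN/VHOST_DMA_FAILED_LEN and vq->heads follow the naming used in this series:

/* Sketch: publish the DMA completion before waking the vhost worker.
 * With the old (racy) order the worker could wake first, still see the
 * entry as in-flight, go back to sleep, and miss the completion. */
static void zerocopy_done_sketch(struct vhost_virtqueue *vq,
				 unsigned int desc, bool success)
{
	/* 1. Mark the DMA as done (or failed) first. */
	vq->heads[desc].len = success ? VHOST_DMA_DONE_LEN
				      : VHOST_DMA_FAILED_LEN;
	/* 2. Only then queue the poll, so whatever the worker observes
	 *    after waking already includes this completion. */
	vhost_poll_queue(&vq->poll);
}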

[PATCH V3 3/6] vhost: switch to use vhost_add_used_n()

2013-09-02 Thread Jason Wang
Let vhost_add_used() use vhost_add_used_n() to reduce code duplication. To avoid the overhead brought by __copy_to_user(), we will use put_user() when only one used element needs to be added. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/vhost.c | 54

[PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time

2013-09-02 Thread Jason Wang
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if upend_idx != done_idx we still set zcopy_used to true and rollback this choice later. This could be avoided by determining zerocopy once by checking all conditions at one time before. Signed-off-by: Jason Wang jasow
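
In code terms, "checking all conditions at one time" amounts to something like the sketch below; the condition list is reconstructed from this series' discussion (VHOST_GOODCOPY_LEN, a free upend_idx/done_idx slot, vhost_net_tx_select_zcopy()) and may not match the merged expression exactly:

/* Sketch: decide zerocopy for a tx packet once, up front, instead of
 * tentatively choosing it and rolling the choice back later. */
static bool tx_can_zerocopy_sketch(struct vhost_net *net,
				   struct vhost_net_virtqueue *nvq,
				   size_t len, bool zcopy_enabled)
{
	return zcopy_enabled &&
	       len >= VHOST_GOODCOPY_LEN &&	/* big enough to pay off */
	       (nvq->upend_idx + 1) % UIO_MAXIOV != nvq->done_idx &&
						/* a pending-DMA slot is free */
	       vhost_net_tx_select_zcopy(net);	/* recent completions look OK */
}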

[PATCH V3 6/6] vhost_net: correctly limit the max pending buffers

2013-09-02 Thread Jason Wang
into main loop. Tests show about 5%-10% improvement on per cpu throughput for guest tx. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/net.c | 18 +++--- 1 files changed, 7 insertions(+), 11 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index

[PATCH V3 2/6] vhost_net: use vhost_add_used_and_signal_n() in vhost_zerocopy_signal_used()

2013-09-02 Thread Jason Wang
far fewer used index updates and memory barriers. A 2% performance improvement was seen in the netperf TCP_RR test. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/net.c | 13 - 1 files changed, 8 insertions(+), 5 deletions(-) diff --git a/drivers/vhost/net.c b

[PATCH V3 1/6] vhost_net: make vhost_zerocopy_signal_used() return void

2013-09-02 Thread Jason Wang
None of its callers use its return value, so let it return void. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/net.c |5 ++--- 1 files changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 969a859..280ee66 100644 --- a/drivers

[PATCH V3 0/6] vhost code cleanup and minor enhancement

2013-09-02 Thread Jason Wang
based on Michael's suggestion. Jason Wang (6): vhost_net: make vhost_zerocopy_signal_used() return void vhost_net: use vhost_add_used_and_signal_n() in vhost_zerocopy_signal_used() vhost: switch to use vhost_add_used_n() vhost_net: determine whether or not to use zerocopy at one time

Re: [PATCH V2 5/6] vhost_net: poll vhost queue after marking DMA is done

2013-09-01 Thread Jason Wang
On 08/31/2013 12:44 AM, Ben Hutchings wrote: On Fri, 2013-08-30 at 12:29 +0800, Jason Wang wrote: We used to poll the vhost queue before marking DMA as done; this is racy, since the vhost thread may be woken up before DMA is marked done, which can result in the signal being missed. Fix this by always polling

Re: [PATCH V2 4/6] vhost_net: determine whether or not to use zerocopy at one time

2013-09-01 Thread Jason Wang
On 08/31/2013 02:35 AM, Sergei Shtylyov wrote: Hello. On 08/30/2013 08:29 AM, Jason Wang wrote: Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if upend_idx != done_idx we still set zcopy_used to true and rollback this choice later. This could be avoided

Re: Is fallback vhost_net to qemu for live migrate available?

2013-09-01 Thread Jason Wang
On 08/31/2013 12:45 PM, Qin Chuanyu wrote: On 2013/8/30 0:08, Anthony Liguori wrote: Hi Qin, By changing the memory copy and notify mechanism, currently virtio-net with vhost_net can run on Xen with good performance. I think the key in doing this would be to implement a proper ioeventfd

Re: [PATCH 6/6] vhost_net: remove the max pending check

2013-08-29 Thread Jason Wang
On 08/25/2013 07:53 PM, Michael S. Tsirkin wrote: On Fri, Aug 23, 2013 at 04:55:49PM +0800, Jason Wang wrote: On 08/20/2013 10:48 AM, Jason Wang wrote: On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote: On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote: We used to limit the max

[PATCH V2 6/6] vhost_net: correctly limit the max pending buffers

2013-08-29 Thread Jason Wang
into main loop. Tests show about 5%-10% improvement on per cpu throughput for guest tx. But a 5% drop on per cpu transaction rate for a single session TCP_RR. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/net.c | 15 --- 1 files changed, 4 insertions(+), 11 deletions

[PATCH V2 4/6] vhost_net: determine whether or not to use zerocopy at one time

2013-08-29 Thread Jason Wang
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if upend_idx != done_idx we still set zcopy_used to true and rollback this choice later. This could be avoided by determining zerocopy once by checking all conditions at one time before. Signed-off-by: Jason Wang jasow

[PATCH V2 2/6] vhost_net: use vhost_add_used_and_signal_n() in vhost_zerocopy_signal_used()

2013-08-29 Thread Jason Wang
far fewer used index updates and memory barriers. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/net.c | 13 - 1 files changed, 8 insertions(+), 5 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 280ee66..8a6dd0d 100644

[PATCH V2 3/6] vhost: switch to use vhost_add_used_n()

2013-08-29 Thread Jason Wang
Let vhost_add_used() to use vhost_add_used_n() to reduce the code duplication. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/vhost.c | 54 ++-- 1 files changed, 12 insertions(+), 42 deletions(-) diff --git a/drivers/vhost/vhost.c b

[PATCH V2 5/6] vhost_net: poll vhost queue after marking DMA is done

2013-08-29 Thread Jason Wang
We used to poll the vhost queue before marking DMA as done; this is racy, since the vhost thread may be woken up before DMA is marked done, which can result in the signal being missed. Fix this by always polling the vhost thread after DMA is marked done. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost

[PATCH V2 1/6] vhost_net: make vhost_zerocopy_signal_used() returns void

2013-08-29 Thread Jason Wang
None of its callers use its return value, so let it return void. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/net.c |5 ++--- 1 files changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 969a859..280ee66 100644 --- a/drivers

[PATCH V2 0/6] vhost code cleanup and minor enhancement

2013-08-29 Thread Jason Wang
!= done_idx to (upend_idx + 1) % UIO_MAXIOV == done_idx. - Switch to use put_user() in __vhost_add_used_n() if there's only one used - Keep the max pending check based on Michael's suggestion. Jason Wang (6): vhost_net: make vhost_zerocopy_signal_used() returns void vhost_net: use

Re: [PATCH 6/6] vhost_net: remove the max pending check

2013-08-26 Thread Jason Wang
On 08/25/2013 07:53 PM, Michael S. Tsirkin wrote: On Fri, Aug 23, 2013 at 04:55:49PM +0800, Jason Wang wrote: On 08/20/2013 10:48 AM, Jason Wang wrote: On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote: On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote: We used to limit the max

Re: [PATCH 2/6] vhost_net: use vhost_add_used_and_signal_n() in vhost_zerocopy_signal_used()

2013-08-23 Thread Jason Wang
On 08/20/2013 10:33 AM, Jason Wang wrote: On 08/16/2013 05:54 PM, Michael S. Tsirkin wrote: On Fri, Aug 16, 2013 at 01:16:26PM +0800, Jason Wang wrote: Switch to use vhost_add_used_and_signal_n() to avoid multiple calls to vhost_add_used_and_signal(). With the patch we will call at most 2

Re: [PATCH 6/6] vhost_net: remove the max pending check

2013-08-23 Thread Jason Wang
On 08/20/2013 10:48 AM, Jason Wang wrote: On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote: On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote: We used to limit the max pending DMAs to prevent guest from pinning too many pages. But this could be removed since: - We have

Re: [PATCH 2/6] vhost_net: use vhost_add_used_and_signal_n() in vhost_zerocopy_signal_used()

2013-08-19 Thread Jason Wang
On 08/16/2013 05:54 PM, Michael S. Tsirkin wrote: On Fri, Aug 16, 2013 at 01:16:26PM +0800, Jason Wang wrote: Switch to use vhost_add_used_and_signal_n() to avoid multiple calls to vhost_add_used_and_signal(). With the patch we will call at most 2 times (considering done_idx wrap around

Re: [PATCH 3/6] vhost: switch to use vhost_add_used_n()

2013-08-19 Thread Jason Wang
On 08/16/2013 05:56 PM, Michael S. Tsirkin wrote: On Fri, Aug 16, 2013 at 01:16:27PM +0800, Jason Wang wrote: Let vhost_add_used() to use vhost_add_used_n() to reduce the code duplication. Signed-off-by: Jason Wang jasow...@redhat.com Does compiler inline it then? Reason I ask, last

Re: [PATCH 6/6] vhost_net: remove the max pending check

2013-08-19 Thread Jason Wang
On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote: On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote: We used to limit the max pending DMAs to prevent guest from pinning too many pages. But this could be removed since: - We have the sk_wmem_alloc check in both tun/macvtap to do

[PATCH 1/6] vhost_net: make vhost_zerocopy_signal_used() returns void

2013-08-15 Thread Jason Wang
None of its callers use its return value, so let it return void. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/net.c |5 ++--- 1 files changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 969a859..280ee66 100644 --- a/drivers

[PATCH 0/6] vhost code cleanup and minor enhancement

2013-08-15 Thread Jason Wang
Hi all: This series tries to unify and simplify vhost codes especially for zerocopy. Please review. Thanks Jason Wang (6): vhost_net: make vhost_zerocopy_signal_used() returns void vhost_net: use vhost_add_used_and_signal_n() in vhost_zerocopy_signal_used() vhost: switch to use

[PATCH 6/6] vhost_net: remove the max pending check

2013-08-15 Thread Jason Wang
from guest. Guest can easily exceed the limitation. - We've already checked upend_idx != done_idx and switched to non-zerocopy then. So even if all vq->heads were used, we can still do the packet transmission. So remove this check completely. Signed-off-by: Jason Wang jasow...@redhat.com

[PATCH 4/6] vhost_net: determine whether or not to use zerocopy at one time

2013-08-15 Thread Jason Wang
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if upend_idx != done_idx we still set zcopy_used to true and rollback this choice later. This could be avoided by determining zerocopy once by checking all conditions at one time before. Signed-off-by: Jason Wang jasow

[PATCH 2/6] vhost_net: use vhost_add_used_and_signal_n() in vhost_zerocopy_signal_used()

2013-08-15 Thread Jason Wang
Switch to use vhost_add_used_and_signal_n() to avoid multiple calls to vhost_add_used_and_signal(). With the patch we will call at most 2 times (considering done_idx wrap around) compared to N times without this patch. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/net.c | 13
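
The "at most 2 times" bound comes from the wrap-around of the circular completion array; below is a sketch of the batching, close in spirit to vhost_zerocopy_signal_used() but with error handling and statistics dropped, and field names assumed from this series:

/* Sketch: flush completed zerocopy entries in at most two batches. */
static void zerocopy_signal_used_sketch(struct vhost_virtqueue *vq,
					struct vhost_net_virtqueue *nvq)
{
	int i, add, j = 0;

	/* Count the finished entries starting at done_idx. */
	for (i = nvq->done_idx; i != nvq->upend_idx; i = (i + 1) % UIO_MAXIOV) {
		if (vq->heads[i].len == VHOST_DMA_IN_PROGRESS)
			break;			/* still in flight, stop */
		vq->heads[i].len = VHOST_DMA_CLEAR_LEN;
		++j;
	}
	/* Hand them to the guest in one call per contiguous run: one call
	 * before the UIO_MAXIOV wrap point and at most one after it,
	 * instead of one vhost_add_used_and_signal() call per buffer. */
	while (j) {
		add = min(UIO_MAXIOV - nvq->done_idx, j);
		vhost_add_used_and_signal_n(vq->dev, vq,
					    &vq->heads[nvq->done_idx], add);
		nvq->done_idx = (nvq->done_idx + add) % UIO_MAXIOV;
		j -= add;
	}
}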

[PATCH 5/6] vhost_net: poll vhost queue after marking DMA is done

2013-08-15 Thread Jason Wang
We used to poll the vhost queue before marking DMA as done; this is racy, since the vhost thread may be woken up before DMA is marked done, which can result in the signal being missed. Fix this by always polling the vhost thread after DMA is marked done. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost

[PATCH 3/6] vhost: switch to use vhost_add_used_n()

2013-08-15 Thread Jason Wang
Let vhost_add_used() to use vhost_add_used_n() to reduce the code duplication. Signed-off-by: Jason Wang jasow...@redhat.com --- drivers/vhost/vhost.c | 43 ++- 1 files changed, 2 insertions(+), 41 deletions(-) diff --git a/drivers/vhost/vhost.c b

Re: [PATCH V2 4/4] x86: correctly detect hypervisor

2013-08-04 Thread Jason Wang
On 07/25/2013 04:54 PM, Jason Wang wrote: We try to handle hypervisor compatibility mode by detecting hypervisors in a specific order. This is not robust, since hypervisors may implement each other's features. This patch tries to handle this situation by always choosing the last one
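
Following the description above (the merged code may differ in detail), the detection loop no longer stops at the first successful ->detect() but keeps scanning so that the last match wins; a hedged sketch, with the table name and size macro assumed for illustration:

/* Sketch: when hypervisors implement each other's interfaces, the
 * first positive detect() is not necessarily the hypervisor we are
 * actually running on, so remember the last one that matches. */
static const struct hypervisor_x86 *detect_hypervisor_sketch(void)
{
	const struct hypervisor_x86 *winner = NULL;
	int i;

	for (i = 0; i < NUM_HYPERVISORS_SKETCH; i++) {	/* assumed size macro */
		if (hypervisors_sketch[i]->detect())	/* assumed table name */
			winner = hypervisors_sketch[i];	/* last match wins */
	}
	return winner;
}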
