From: Martinx - ? [mailto:thiagocmarti...@gmail.com]
Sent: Monday, May 30, 2016 5:01 PM
To: Bodireddy, Bhanuprakash
Cc: Christian Ehrhardt; dev; qemu-stable at nongnu.org
Subject: Re: [ovs-dev] If 1 KVM Guest loads the virtio-pci, on top of dpdkvhostuser OVS socket interface, it slows down
Answers inline, as follows:
On 30 May 2016 at 12:44, Bodireddy, Bhanuprakash <bhanuprakash.bodireddy at intel.com> wrote:
Hello Bhanu,
I'm a little confused: you said that the problem can be fixed, but later you also said:

"On a Multi VM setup even with the above patch applied, one might see aggregate throughput difference when vNIC is bind to igb_uio vs virtio-pci"...
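For context, "binding the vNIC to igb_uio" means switching the guest's virtio NIC from the kernel virtio-pci driver to DPDK's igb_uio UIO driver. A hedged sketch of how that is typically done inside the guest; the PCI address 0000:00:04.0 is illustrative, and the binding script in DPDK releases of that era lived at tools/dpdk_nic_bind.py (newer releases rename it dpdk-devbind.py):

```shell
# Inside the KVM guest: load the UIO modules, then move the virtio NIC
# from virtio-pci to igb_uio (0000:00:04.0 is a hypothetical address).
modprobe uio
insmod "$RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko"

./tools/dpdk_nic_bind.py --status                      # show NICs and current drivers
./tools/dpdk_nic_bind.py --bind=igb_uio 0000:00:04.0   # take it over from virtio-pci
```

Rebinding back to virtio-pci (`--bind=virtio-pci`) reverses the change, which is how the two configurations compared above are switched between.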
My idea is to use OVS with DPDK i
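For reference, the dpdkvhostuser socket interface discussed in this thread is created on the host with ovs-vsctl; a minimal sketch, assuming an OVS build with DPDK support and illustrative names br0 and vhost-user1:

```shell
# On the host: create a userspace (netdev) bridge and add a
# vhost-user socket port for the guest to connect to.
# br0 and vhost-user1 are illustrative names.
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
```

QEMU is then pointed at the resulting socket (by default under the OVS run directory) with a vhost-user netdev.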
I could reproduce the issue, and it can be fixed as below.

Firstly, the throughput issues observed with other VMs when a new VM is started can be fixed using the patch in this thread: http://openvswitch.org/pipermail/dev/2016-May/071615.html. I have put up an explanation in that thread for the c
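For anyone following along, a patch posted to ovs-dev can be applied to an Open vSwitch source tree roughly as follows; this is a sketch, and the filename 071615.patch is hypothetical (a patch saved from the archive link above):

```shell
# Apply a mailing-list patch to an OVS checkout and rebuild.
# 071615.patch is a hypothetical name for the saved patch file.
cd ovs
git am ../071615.patch        # if the mail is a git-formatted patch
# patch -p1 < ../071615.patch # fallback if git am refuses it
make && sudo make install
```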
Hi,

I think your issue is connected to the one mentioned in these threads:
http://openvswitch.org/pipermail/dev/2016-May/071115.html
http://openvswitch.org/pipermail/dev/2016-May/071517.html

You're welcome to join the discussion in the first thread.

Best regards, Ilya Maximets.