> We already tune these values in the VM. Would you suggest tuning them on the
> compute nodes as well?
No need on the compute nodes (AFAIK).
How much pps does your VM need to handle?
You can monitor CPU usage, especially si (softirq time), to see where
packets may be dropped. If you see a vhost thread almost reach 100% CPU,
multiqueue should help.
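For reference, a rough way to do this monitoring on a hypervisor (a sketch; mpstat comes from the sysstat package, and tool availability is an assumption):

```shell
# Per-CPU softirq time (the "si" column in top); one core pegged in
# %soft while the others idle suggests a single-queue bottleneck.
mpstat -P ALL 1 5

# Per-CPU packet drops before the socket layer: the 2nd column of
# /proc/net/softnet_stat is a hex drop counter.
awk '{ print "cpu" NR-1 " dropped(hex)=" $2 }' /proc/net/softnet_stat

# vhost kernel threads (one per virtio queue) sitting near 100% CPU
# are the signal that multiqueue would help.
top -b -n 1 | grep vhost
```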
Hi everyone,
The User Committee will be meeting on 07/31/2017. We have items on the
agenda[1]; please feel free to append additional topics. You can find
meeting details on eavesdrop[2].
[1] https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#
Hi Liping,
Thank you for the detailed response! I've gone over our environment and
checked the various values.
First, I found that we are dropping packets on the physical NICs as well as
inside the instance (though only when its UDP receive buffer overflows).
Our physical NICs are using the
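To narrow down where drops like these happen, checks along these lines may help (a sketch; eth0 is a placeholder, counter names vary by NIC driver, and the sysctl values are illustrative, not recommendations):

```shell
# Drops on the physical NIC (driver-dependent counter names):
ethtool -S eth0 | grep -iE 'drop|miss|fifo'

# UDP receive errors, including receive-buffer overflows, as seen by
# the kernel:
netstat -su | grep -i errors

# Larger socket receive buffers can absorb bursty UDP traffic:
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.rmem_default=8388608
```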
It is merged in Mitaka, but your Glance images must be decorated with:
hw_vif_multiqueue_enabled='true'
When you run "openstack image show uuid"
you should see this in the properties, and then you will have multiqueue.
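For anyone following along, the end-to-end steps look roughly like this (a sketch; IMAGE_UUID is a placeholder, and the guest-side queue count is an example that should not exceed the instance's vCPU count):

```shell
# Decorate the Glance image; only instances booted from it afterwards
# pick up the property:
openstack image set --property hw_vif_multiqueue_enabled=true IMAGE_UUID

# Confirm the property is present:
openstack image show IMAGE_UUID -c properties

# Inside a guest booted from that image, activate the extra queues:
ethtool -L eth0 combined 4
```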
Saverio
2017-07-28 14:50 GMT+02:00 John Petrini :
Hi Saverio,
Thanks for the info. The parameter is missing completely:
I came across the blueprint for adding the image property
hw_vif_multiqueue_enabled.
Do you know if this feature is available in Mitaka?
John Petrini
Platforms Engineer //
Hello John,
a common problem is packets being dropped when they pass from the
hypervisor to the instance. There is a bottleneck there.
Check the 'virsh dumpxml' output of one of the instances that is dropping
packets. Look at the interface section; it should look like:
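(For reference, a sketch of what the interface section typically looks like once multiqueue is active; the MAC address and bridge name below are placeholders. The queues attribute on the driver line is the part to look for:)

```xml
<interface type='bridge'>
  <mac address='52:54:00:ab:cd:ef'/>
  <source bridge='qbr1234'/>
  <model type='virtio'/>
  <!-- queues='N' appears only when multiqueue is enabled -->
  <driver name='vhost' queues='4'/>
</interface>
```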