Re: [Openstack-operators] [Openstack] UDP Buffer Filling

2017-07-28 Thread Liping Mao (limao)
> We already tune these values in the VM. Would you suggest tuning them on the 
> compute nodes as well?
No need on the compute nodes (AFAIK).


How many packets per second (pps) does your VM need to handle?
You can monitor CPU usage, especially the softirq (si) time, to see where
drops may occur. If you see a vhost thread almost reach 100% CPU,
multiqueue may help in some cases.
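
A quick sketch of how to check (assuming the sysstat tools are
installed; thread and interface names will differ per host):

  # Per-CPU softirq usage -- a core stuck high in %soft is where
  # packets are likely being dropped
  mpstat -P ALL 2

  # vhost kernel threads near 100% CPU suggest a single-queue
  # bottleneck that multiqueue could relieve
  ps -eLo pid,comm,pcpu --sort=-pcpu | grep vhost | head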

Thanks.

Regards,
Liping Mao

> On Jul 28, 2017, at 22:45, John Petrini wrote:
> 
> We already tune these values in the VM. Would you suggest tuning them on the 
> compute nodes as well?
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Reminder: User Committee Meeting on 7/31/2017

2017-07-28 Thread Shamail Tahir
Hi everyone,

The User Committee will be meeting on 07/31/2017 since we have items on the
agenda[1]; please feel free to append additional topics as well. You can
find meeting details on eavesdrop[2].

[1] https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Meeting_Agenda.2FPrevious_Meeting_Logs
[2] http://eavesdrop.openstack.org/#User_Committee_Meeting

-- 
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread John Petrini
Hi Liping,

Thank you for the detailed response! I've gone over our environment and
checked the various values.

First I found that we are dropping packets on the physical NICs as well as
inside the instance (though only when its UDP receive buffer overflows).
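
For reference, the checks look roughly like this (a sketch; eth0 stands
in for the real interface name):

  # On the compute node: drop counters on the physical NIC
  ethtool -S eth0 | grep -i drop

  # Inside the instance: UDP receive-buffer overflows show up as
  # "receive buffer errors" in the UDP statistics
  netstat -su | grep -i error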

Our physical NICs are using the default ring size and our tap interfaces
are using the default txqueuelen of 500. There are dropped packets on the
tap interfaces, but the counts are rather low and don't seem to increase
very often, so I'm not sure that there's a problem there, but I'm
considering adjusting the value anyway to avoid issues in the future.
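
The knobs involved, roughly (a sketch with placeholder device names):

  # NIC ring buffers: compare current against the hardware maximum,
  # then raise if there is headroom
  ethtool -g eth0
  ethtool -G eth0 rx 4096

  # Tap interface transmit queue (default qlen is 500)
  ip link show tapXXXXXXXX-XX
  ip link set tapXXXXXXXX-XX txqueuelen 1000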

We already tune these values in the VM. Would you suggest tuning them on
the compute nodes as well?
net.core.rmem_max / net.core.rmem_default / net.core.wmem_max /
net.core.wmem_default
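
For reference, the tuning looks like this (a sketch with illustrative
values, not the exact ones from this thread):

  sysctl -w net.core.rmem_max=16777216
  sysctl -w net.core.rmem_default=8388608
  sysctl -w net.core.wmem_max=16777216
  sysctl -w net.core.wmem_default=8388608
  # persist in /etc/sysctl.d/ to survive reboots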

I'm going to do some testing with multiqueue enabled since both you and
Saverio have suggested it.



John Petrini

Platforms Engineer   //   *CoreDial, LLC*   //   coredial.com

751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
*P:* 215.297.4400 x232   //   *F:* 215.297.4401   //   *E:* jpetr...@coredial.com

On Fri, Jul 28, 2017 at 9:25 AM, Erik McCormick wrote:

>
>
> On Jul 28, 2017 8:51 AM, "John Petrini"  wrote:
>
> Hi Saverio,
>
> Thanks for the info. The parameter is missing completely:
>
> <interface type='bridge'>
>   <mac address='fa:16:3e:...'/>
>   <source bridge='qbr...'/>
>   <target dev='tap...'/>
>   <model type='virtio'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
> </interface>
>
> I've come across the blueprint for adding the image property
> hw_vif_multiqueue_enabled. Do you know if this feature is available in
> Mitaka?
>
> It was merged 2 years ago, so it should have been there since Liberty.
>
>
> John Petrini
>
> Platforms Engineer   //   *CoreDial, LLC*   //   coredial.com
>
> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
> *P:* 215.297.4400 x232   //   *F:* 215.297.4401   //   *E:* jpetr...@coredial.com
>
>
> On Fri, Jul 28, 2017 at 3:59 AM, Saverio Proto  wrote:
>
>> Hello John,
>>
>> A common problem is packets being dropped when they pass from the
>> hypervisor to the instance. There is a bottleneck there.
>>
>> Check the 'virsh dumpxml' output of one of the instances that is
>> dropping packets. Look for the interface section; it should look like:
>>
>> <interface type='bridge'>
>>   <mac address='fa:16:3e:...'/>
>>   <source bridge='qbr...'/>
>>   <target dev='tap...'/>
>>   <model type='virtio'/>
>>   <driver name='vhost' queues='4'/>
>>   <alias name='net0'/>
>>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>> </interface>
>>
>> How many queues do you have? Having only 1, or the parameter missing
>> completely, is usually not good.
>>
>> In Mitaka, Nova should use 1 queue for every instance CPU core you
>> have. It is worth checking whether this is set correctly in your setup.
>>
>> Cheers,
>>
>> Saverio
>>
>>
>>
>> 2017-07-27 17:49 GMT+02:00 John Petrini :
>> > Hi List,
>> >
>> > We are running Mitaka with VLAN provider networking. We've recently
>> > encountered a problem where the UDP receive queue on instances is
>> filling up
>> > and we begin dropping packets. Moving instances out of OpenStack onto
>> bare
>> > metal resolves the issue completely.
>> >
>> > These instances are running Asterisk, which should be pulling these
>> packets
>> > off the queue, but it appears to be falling behind no matter what
>> resources we
>> > give it.
>> >
>> > We can't seem to pin down a reason why we would see this behavior in
>> KVM but
>> > not on metal. I'm hoping someone on the list might have some insight or
>> > ideas.
>> >
>> > Thank You,
>> >
>> > John
>> >
>>
>
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread Saverio Proto
It is merged in Mitaka, but your Glance images must be decorated with:

hw_vif_multiqueue_enabled='true'

When you do "openstack image show <uuid>", you should see this in the
properties, and then you will have multiqueue.
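
A sketch of setting and verifying it ("my-image" is a placeholder):

  openstack image set --property hw_vif_multiqueue_enabled=true my-image
  openstack image show my-image -c properties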

Saverio



2017-07-28 14:50 GMT+02:00 John Petrini :

> Hi Saverio,
>
> Thanks for the info. The parameter is missing completely:
>
> <interface type='bridge'>
>   <mac address='fa:16:3e:...'/>
>   <source bridge='qbr...'/>
>   <target dev='tap...'/>
>   <model type='virtio'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
> </interface>
>
> I've come across the blueprint for adding the image property
> hw_vif_multiqueue_enabled. Do you know if this feature is available in
> Mitaka?
>
> John Petrini
>
> Platforms Engineer   //   *CoreDial, LLC*   //   coredial.com
>
> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
> *P:* 215.297.4400 x232   //   *F:* 215.297.4401   //   *E:* jpetr...@coredial.com
>
> On Fri, Jul 28, 2017 at 3:59 AM, Saverio Proto  wrote:
>
>> Hello John,
>>
>> A common problem is packets being dropped when they pass from the
>> hypervisor to the instance. There is a bottleneck there.
>>
>> Check the 'virsh dumpxml' output of one of the instances that is
>> dropping packets. Look for the interface section; it should look like:
>>
>> <interface type='bridge'>
>>   <mac address='fa:16:3e:...'/>
>>   <source bridge='qbr...'/>
>>   <target dev='tap...'/>
>>   <model type='virtio'/>
>>   <driver name='vhost' queues='4'/>
>>   <alias name='net0'/>
>>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>> </interface>
>>
>> How many queues do you have? Having only 1, or the parameter missing
>> completely, is usually not good.
>>
>> In Mitaka, Nova should use 1 queue for every instance CPU core you
>> have. It is worth checking whether this is set correctly in your setup.
>>
>> Cheers,
>>
>> Saverio
>>
>>
>>
>> 2017-07-27 17:49 GMT+02:00 John Petrini :
>> > Hi List,
>> >
>> > We are running Mitaka with VLAN provider networking. We've recently
>> > encountered a problem where the UDP receive queue on instances is
>> filling up
>> > and we begin dropping packets. Moving instances out of OpenStack onto
>> bare
>> > metal resolves the issue completely.
>> >
>> > These instances are running Asterisk, which should be pulling these
>> packets
>> > off the queue, but it appears to be falling behind no matter what
>> resources we
>> > give it.
>> >
>> > We can't seem to pin down a reason why we would see this behavior in
>> KVM but
>> > not on metal. I'm hoping someone on the list might have some insight or
>> > ideas.
>> >
>> > Thank You,
>> >
>> > John
>> >
>>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread Erik McCormick
On Jul 28, 2017 8:51 AM, "John Petrini"  wrote:

Hi Saverio,

Thanks for the info. The parameter is missing completely:

<interface type='bridge'>
  <mac address='fa:16:3e:...'/>
  <source bridge='qbr...'/>
  <target dev='tap...'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

I've come across the blueprint for adding the image property
hw_vif_multiqueue_enabled. Do you know if this feature is available in
Mitaka?

It was merged 2 years ago, so it should have been there since Liberty.


John Petrini

Platforms Engineer   //   *CoreDial, LLC*   //   coredial.com

751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
*P:* 215.297.4400 x232   //   *F:* 215.297.4401   //   *E:* jpetr...@coredial.com


On Fri, Jul 28, 2017 at 3:59 AM, Saverio Proto  wrote:

> Hello John,
>
> A common problem is packets being dropped when they pass from the
> hypervisor to the instance. There is a bottleneck there.
>
> Check the 'virsh dumpxml' output of one of the instances that is
> dropping packets. Look for the interface section; it should look like:
>
> <interface type='bridge'>
>   <mac address='fa:16:3e:...'/>
>   <source bridge='qbr...'/>
>   <target dev='tap...'/>
>   <model type='virtio'/>
>   <driver name='vhost' queues='4'/>
>   <alias name='net0'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
> </interface>
>
> How many queues do you have? Having only 1, or the parameter missing
> completely, is usually not good.
>
> In Mitaka, Nova should use 1 queue for every instance CPU core you
> have. It is worth checking whether this is set correctly in your setup.
>
> Cheers,
>
> Saverio
>
>
>
> 2017-07-27 17:49 GMT+02:00 John Petrini :
> > Hi List,
> >
> > We are running Mitaka with VLAN provider networking. We've recently
> > encountered a problem where the UDP receive queue on instances is
> filling up
> > and we begin dropping packets. Moving instances out of OpenStack onto
> bare
> > metal resolves the issue completely.
> >
> > These instances are running Asterisk, which should be pulling these
> packets
> > off the queue, but it appears to be falling behind no matter what
> resources we
> > give it.
> >
> > We can't seem to pin down a reason why we would see this behavior in KVM
> but
> > not on metal. I'm hoping someone on the list might have some insight or
> > ideas.
> >
> > Thank You,
> >
> > John
> >
>


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread John Petrini
Hi Saverio,

Thanks for the info. The parameter is missing completely:

<interface type='bridge'>
  <mac address='fa:16:3e:...'/>
  <source bridge='qbr...'/>
  <target dev='tap...'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

I've come across the blueprint for adding the image property
hw_vif_multiqueue_enabled.
Do you know if this feature is available in Mitaka?

John Petrini

Platforms Engineer   //   *CoreDial, LLC*   //   coredial.com

751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
*P:* 215.297.4400 x232   //   *F:* 215.297.4401   //   *E:* jpetr...@coredial.com

On Fri, Jul 28, 2017 at 3:59 AM, Saverio Proto  wrote:

> Hello John,
>
> A common problem is packets being dropped when they pass from the
> hypervisor to the instance. There is a bottleneck there.
>
> Check the 'virsh dumpxml' output of one of the instances that is
> dropping packets. Look for the interface section; it should look like:
>
> <interface type='bridge'>
>   <mac address='fa:16:3e:...'/>
>   <source bridge='qbr...'/>
>   <target dev='tap...'/>
>   <model type='virtio'/>
>   <driver name='vhost' queues='4'/>
>   <alias name='net0'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
> </interface>
>
> How many queues do you have? Having only 1, or the parameter missing
> completely, is usually not good.
>
> In Mitaka, Nova should use 1 queue for every instance CPU core you
> have. It is worth checking whether this is set correctly in your setup.
>
> Cheers,
>
> Saverio
>
>
>
> 2017-07-27 17:49 GMT+02:00 John Petrini :
> > Hi List,
> >
> > We are running Mitaka with VLAN provider networking. We've recently
> > encountered a problem where the UDP receive queue on instances is
> filling up
> > and we begin dropping packets. Moving instances out of OpenStack onto
> bare
> > metal resolves the issue completely.
> >
> > These instances are running Asterisk, which should be pulling these
> packets
> > off the queue, but it appears to be falling behind no matter what
> resources we
> > give it.
> >
> > We can't seem to pin down a reason why we would see this behavior in KVM
> but
> > not on metal. I'm hoping someone on the list might have some insight or
> > ideas.
> >
> > Thank You,
> >
> > John
> >
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread Saverio Proto
Hello John,

A common problem is packets being dropped when they pass from the
hypervisor to the instance. There is a bottleneck there.

Check the 'virsh dumpxml' output of one of the instances that is dropping
packets. Look for the interface section; it should look like:

<interface type='bridge'>
  <mac address='fa:16:3e:...'/>
  <source bridge='qbr...'/>
  <target dev='tap...'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

How many queues do you have? Having only 1, or the parameter missing
completely, is usually not good.

In Mitaka, Nova should use 1 queue for every instance CPU core you
have. It is worth checking whether this is set correctly in your setup.
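
A sketch of how to check (placeholder names):

  # On the compute node: does the vif get a multiqueue vhost driver?
  virsh dumpxml instance-00000001 | grep -A1 "model type='virtio'"

  # Inside the guest the queues must be enabled as well
  ethtool -l eth0               # "Combined" shows available vs. active
  ethtool -L eth0 combined 4    # up to one queue per vCPU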

Cheers,

Saverio



2017-07-27 17:49 GMT+02:00 John Petrini :
> Hi List,
>
> We are running Mitaka with VLAN provider networking. We've recently
> encountered a problem where the UDP receive queue on instances is filling up
> and we begin dropping packets. Moving instances out of OpenStack onto bare
> metal resolves the issue completely.
>
> These instances are running Asterisk, which should be pulling these packets
> off the queue, but it appears to be falling behind no matter what resources we
> give it.
>
> We can't seem to pin down a reason why we would see this behavior in KVM but
> not on metal. I'm hoping someone on the list might have some insight or
> ideas.
>
> Thank You,
>
> John
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators