Re: [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread John Petrini
Hi Liping,

Thank you for the detailed response! I've gone over our environment and
checked the various values.

First, I found that we are dropping packets on the physical NICs as well as
inside the instance (though only when its UDP receive buffer overflows).

Our physical NICs are using the default ring size and our tap interfaces
are using the default txqueuelen of 500. There are dropped packets on the
tap interfaces, but the counts are rather low and don't seem to increase
very often, so I'm not sure there's a problem there. I'm considering
adjusting the value anyway to avoid issues in the future.
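
For reference, this is roughly how I'm checking and adjusting these values
(the interface names below are placeholders for our actual NIC and tap
devices):

  # show current vs. maximum RX/TX ring sizes on the physical NIC
  ethtool -g eth0

  # raise the RX ring toward the hardware maximum (illustrative value)
  ethtool -G eth0 rx 4096

  # check and raise a tap interface's transmit queue length
  ip link show tap0 | grep qlen
  ip link set tap0 txqueuelen 1000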

We already tune these values in the VM. Would you suggest tuning them on
the compute nodes as well?
net.core.rmem_max / net.core.rmem_default / net.core.wmem_max /
net.core.wmem_default
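
For context, this is the sort of thing we set inside the guests via sysctl
(the values are illustrative, not a recommendation):

  # /etc/sysctl.d/60-udp-buffers.conf -- example values only
  net.core.rmem_max = 16777216
  net.core.rmem_default = 8388608
  net.core.wmem_max = 16777216
  net.core.wmem_default = 8388608

  # apply without a reboot
  sysctl --system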

I'm going to do some testing with multiqueue enabled, since both you and
Saverio have suggested it.



John Petrini

Platforms Engineer  //  CoreDial, LLC  //  coredial.com

751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
P: 215.297.4400 x232  //  F: 215.297.4401  //  E: jpetr...@coredial.com

On Fri, Jul 28, 2017 at 9:25 AM, Erik McCormick wrote:

>
>
> On Jul 28, 2017 8:51 AM, "John Petrini"  wrote:
>
> Hi Saverio,
>
> Thanks for the info. The parameter is missing completely:
>
> <interface type='bridge'>
>   <mac address='fa:16:3e:xx:xx:xx'/>
>   <source bridge='qbrxxxxxxxx-xx'/>
>   <target dev='tapxxxxxxxx-xx'/>
>   <model type='virtio'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
> </interface>
>
> I've come across the blueprint for adding the image property
> hw_vif_multiqueue_enabled. Do you know if this feature is available in
> Mitaka?
>
> It was merged two years ago, so it should have been there since Liberty.
>
>
> John Petrini
>
> Platforms Engineer  //  CoreDial, LLC  //  coredial.com
>
> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
> P: 215.297.4400 x232  //  F: 215.297.4401  //  E: jpetr...@coredial.com
>
>
> On Fri, Jul 28, 2017 at 3:59 AM, Saverio Proto  wrote:
>
>> Hello John,
>>
>> A common problem is packets being dropped when they pass from the
>> hypervisor to the instance; there is a bottleneck there.
>>
>> Check the 'virsh dumpxml' output of one of the instances that is dropping
>> packets and look for the interface section. It should look like this:
>>
>> <interface type='bridge'>
>>   <mac address='fa:16:3e:xx:xx:xx'/>
>>   <source bridge='qbrxxxxxxxx-xx'/>
>>   <target dev='tapxxxxxxxx-xx'/>
>>   <model type='virtio'/>
>>   <driver name='vhost' queues='4'/>
>>   <alias name='net0'/>
>>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>> </interface>
>>
>> How many queues do you have? Having only one, or the parameter missing
>> completely, is usually not good.
>>
>> In Mitaka, Nova should use one queue for every instance vCPU you have.
>> It is worth checking whether this is set correctly in your setup.
>>
>> Cheers,
>>
>> Saverio
>>
>>
>>
>> 2017-07-27 17:49 GMT+02:00 John Petrini :
>> > Hi List,
>> >
>> > We are running Mitaka with VLAN provider networking. We've recently
>> > encountered a problem where the UDP receive queue on instances is
>> > filling up and we begin dropping packets. Moving instances out of
>> > OpenStack onto bare metal resolves the issue completely.
>> >
>> > These instances are running Asterisk, which should be pulling these
>> > packets off the queue, but it appears to be falling behind no matter
>> > what resources we give it.
>> >
>> > We can't seem to pin down a reason why we would see this behavior in
>> > KVM but not on metal. I'm hoping someone on the list might have some
>> > insight or ideas.
>> >
>> > Thank You,
>> >
>> > John
>> >
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread Saverio Proto
It is merged in Mitaka, but your Glance images must be decorated with:

hw_vif_multiqueue_enabled='true'

When you run "openstack image show <uuid>", you should see this in the
properties; once it is set, you will have multiqueue.
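
Roughly like this (the image name and queue count below are just examples):

  openstack image set --property hw_vif_multiqueue_enabled=true my-image

  # inside the guest only one queue is active by default; enable as many
  # combined channels as the instance has vCPUs, e.g. 4:
  ethtool -L eth0 combined 4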

Saverio



2017-07-28 14:50 GMT+02:00 John Petrini :

> Hi Saverio,
>
> Thanks for the info. The parameter is missing completely:
>
> <interface type='bridge'>
>   <mac address='fa:16:3e:xx:xx:xx'/>
>   <source bridge='qbrxxxxxxxx-xx'/>
>   <target dev='tapxxxxxxxx-xx'/>
>   <model type='virtio'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
> </interface>
>
> I've come across the blueprint for adding the image property
> hw_vif_multiqueue_enabled. Do you know if this feature is available in
> Mitaka?
>
> John Petrini
>
> Platforms Engineer  //  CoreDial, LLC  //  coredial.com
>
> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
> P: 215.297.4400 x232  //  F: 215.297.4401  //  E: jpetr...@coredial.com
>
> On Fri, Jul 28, 2017 at 3:59 AM, Saverio Proto  wrote:
>
>> Hello John,
>>
>> A common problem is packets being dropped when they pass from the
>> hypervisor to the instance; there is a bottleneck there.
>>
>> Check the 'virsh dumpxml' output of one of the instances that is dropping
>> packets and look for the interface section. It should look like this:
>>
>> <interface type='bridge'>
>>   <mac address='fa:16:3e:xx:xx:xx'/>
>>   <source bridge='qbrxxxxxxxx-xx'/>
>>   <target dev='tapxxxxxxxx-xx'/>
>>   <model type='virtio'/>
>>   <driver name='vhost' queues='4'/>
>>   <alias name='net0'/>
>>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>> </interface>
>>
>> How many queues do you have? Having only one, or the parameter missing
>> completely, is usually not good.
>>
>> In Mitaka, Nova should use one queue for every instance vCPU you have.
>> It is worth checking whether this is set correctly in your setup.
>>
>> Cheers,
>>
>> Saverio
>>
>>
>>
>> 2017-07-27 17:49 GMT+02:00 John Petrini :
>> > Hi List,
>> >
>> > We are running Mitaka with VLAN provider networking. We've recently
>> > encountered a problem where the UDP receive queue on instances is
>> > filling up and we begin dropping packets. Moving instances out of
>> > OpenStack onto bare metal resolves the issue completely.
>> >
>> > These instances are running Asterisk, which should be pulling these
>> > packets off the queue, but it appears to be falling behind no matter
>> > what resources we give it.
>> >
>> > We can't seem to pin down a reason why we would see this behavior in
>> > KVM but not on metal. I'm hoping someone on the list might have some
>> > insight or ideas.
>> >
>> > Thank You,
>> >
>> > John
>> >
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread Erik McCormick
On Jul 28, 2017 8:51 AM, "John Petrini"  wrote:

Hi Saverio,

Thanks for the info. The parameter is missing completely:

<interface type='bridge'>
  <mac address='fa:16:3e:xx:xx:xx'/>
  <source bridge='qbrxxxxxxxx-xx'/>
  <target dev='tapxxxxxxxx-xx'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

I've come across the blueprint for adding the image property
hw_vif_multiqueue_enabled. Do you know if this feature is available in
Mitaka?

It was merged two years ago, so it should have been there since Liberty.


John Petrini

Platforms Engineer  //  CoreDial, LLC  //  coredial.com

751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
P: 215.297.4400 x232  //  F: 215.297.4401  //  E: jpetr...@coredial.com


On Fri, Jul 28, 2017 at 3:59 AM, Saverio Proto  wrote:

> Hello John,
>
> A common problem is packets being dropped when they pass from the
> hypervisor to the instance; there is a bottleneck there.
>
> Check the 'virsh dumpxml' output of one of the instances that is dropping
> packets and look for the interface section. It should look like this:
>
> <interface type='bridge'>
>   <mac address='fa:16:3e:xx:xx:xx'/>
>   <source bridge='qbrxxxxxxxx-xx'/>
>   <target dev='tapxxxxxxxx-xx'/>
>   <model type='virtio'/>
>   <driver name='vhost' queues='4'/>
>   <alias name='net0'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
> </interface>
>
> How many queues do you have? Having only one, or the parameter missing
> completely, is usually not good.
>
> In Mitaka, Nova should use one queue for every instance vCPU you have.
> It is worth checking whether this is set correctly in your setup.
>
> Cheers,
>
> Saverio
>
>
>
> 2017-07-27 17:49 GMT+02:00 John Petrini :
> > Hi List,
> >
> > We are running Mitaka with VLAN provider networking. We've recently
> > encountered a problem where the UDP receive queue on instances is
> > filling up and we begin dropping packets. Moving instances out of
> > OpenStack onto bare metal resolves the issue completely.
> >
> > These instances are running Asterisk, which should be pulling these
> > packets off the queue, but it appears to be falling behind no matter
> > what resources we give it.
> >
> > We can't seem to pin down a reason why we would see this behavior in
> > KVM but not on metal. I'm hoping someone on the list might have some
> > insight or ideas.
> >
> > Thank You,
> >
> > John
> >


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread John Petrini
Hi Saverio,

Thanks for the info. The parameter is missing completely:

<interface type='bridge'>
  <mac address='fa:16:3e:xx:xx:xx'/>
  <source bridge='qbrxxxxxxxx-xx'/>
  <target dev='tapxxxxxxxx-xx'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

I've come across the blueprint for adding the image property
hw_vif_multiqueue_enabled. Do you know if this feature is available in
Mitaka?

John Petrini

Platforms Engineer  //  CoreDial, LLC  //  coredial.com

751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
P: 215.297.4400 x232  //  F: 215.297.4401  //  E: jpetr...@coredial.com

On Fri, Jul 28, 2017 at 3:59 AM, Saverio Proto  wrote:

> Hello John,
>
> A common problem is packets being dropped when they pass from the
> hypervisor to the instance; there is a bottleneck there.
>
> Check the 'virsh dumpxml' output of one of the instances that is dropping
> packets and look for the interface section. It should look like this:
>
> <interface type='bridge'>
>   <mac address='fa:16:3e:xx:xx:xx'/>
>   <source bridge='qbrxxxxxxxx-xx'/>
>   <target dev='tapxxxxxxxx-xx'/>
>   <model type='virtio'/>
>   <driver name='vhost' queues='4'/>
>   <alias name='net0'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
> </interface>
>
> How many queues do you have? Having only one, or the parameter missing
> completely, is usually not good.
>
> In Mitaka, Nova should use one queue for every instance vCPU you have.
> It is worth checking whether this is set correctly in your setup.
>
> Cheers,
>
> Saverio
>
>
>
> 2017-07-27 17:49 GMT+02:00 John Petrini :
> > Hi List,
> >
> > We are running Mitaka with VLAN provider networking. We've recently
> > encountered a problem where the UDP receive queue on instances is
> > filling up and we begin dropping packets. Moving instances out of
> > OpenStack onto bare metal resolves the issue completely.
> >
> > These instances are running Asterisk, which should be pulling these
> > packets off the queue, but it appears to be falling behind no matter
> > what resources we give it.
> >
> > We can't seem to pin down a reason why we would see this behavior in
> > KVM but not on metal. I'm hoping someone on the list might have some
> > insight or ideas.
> >
> > Thank You,
> >
> > John
> >
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread Saverio Proto
Hello John,

A common problem is packets being dropped when they pass from the
hypervisor to the instance; there is a bottleneck there.

Check the 'virsh dumpxml' output of one of the instances that is dropping
packets and look for the interface section. It should look like this
(values below are placeholders; the driver 'queues' attribute is the
important part):

<interface type='bridge'>
  <mac address='fa:16:3e:xx:xx:xx'/>
  <source bridge='qbrxxxxxxxx-xx'/>
  <target dev='tapxxxxxxxx-xx'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

How many queues do you have? Having only one, or the parameter missing
completely, is usually not good.

In Mitaka, Nova should use one queue for every instance vCPU you have. It
is worth checking whether this is set correctly in your setup.
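
A quick way to confirm (the interface and domain names below are just
examples):

  # inside the guest: supported vs. currently active queues
  ethtool -l eth0

  # on the hypervisor: look for the driver line with the queues attribute
  virsh dumpxml <domain> | grep -B1 -A1 "driver name='vhost'"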

Cheers,

Saverio



2017-07-27 17:49 GMT+02:00 John Petrini :
> Hi List,
>
> We are running Mitaka with VLAN provider networking. We've recently
> encountered a problem where the UDP receive queue on instances is filling
> up and we begin dropping packets. Moving instances out of OpenStack onto
> bare metal resolves the issue completely.
>
> These instances are running Asterisk, which should be pulling these
> packets off the queue, but it appears to be falling behind no matter what
> resources we give it.
>
> We can't seem to pin down a reason why we would see this behavior in KVM
> but not on metal. I'm hoping someone on the list might have some insight
> or ideas.
>
> Thank You,
>
> John
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] UDP Buffer Filling

2017-07-27 Thread John Petrini
Hi Pedro,

Thank you for the suggestion. I will look into this.

John Petrini

Platforms Engineer  //  CoreDial, LLC  //  coredial.com

751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
P: 215.297.4400 x232  //  F: 215.297.4401  //  E: jpetr...@coredial.com

On Thu, Jul 27, 2017 at 12:25 PM, Pedro Sousa  wrote:

> Hi,
>
> Have you considered implementing a network acceleration technique such as
> OVS-DPDK or SR-IOV?
>
> For these kinds of workloads (voice, video), which have low-latency
> requirements, you might need something like DPDK to avoid these issues.
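>
> For what it's worth, guests on an OVS-DPDK host also need hugepage-backed
> memory, which is set as a flavor property (the flavor name below is just
> an example):
>
>   openstack flavor set --property hw:mem_page_size=large my-flavor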
>
> Regards
>
> On Thu, Jul 27, 2017 at 4:49 PM, John Petrini wrote:
>
>> Hi List,
>>
>> We are running Mitaka with VLAN provider networking. We've recently
>> encountered a problem where the UDP receive queue on instances is filling
>> up and we begin dropping packets. Moving instances out of OpenStack onto
>> bare metal resolves the issue completely.
>>
>> These instances are running Asterisk, which should be pulling these
>> packets off the queue, but it appears to be falling behind no matter what
>> resources we give it.
>>
>> We can't seem to pin down a reason why we would see this behavior in KVM
>> but not on metal. I'm hoping someone on the list might have some insight
>> or ideas.
>>
>> Thank You,
>>
>> John
>>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] UDP Buffer Filling

2017-07-27 Thread John Petrini
Hi List,

We are running Mitaka with VLAN provider networking. We've recently
encountered a problem where the UDP receive queue on instances is filling
up and we begin dropping packets. Moving instances out of OpenStack onto
bare metal resolves the issue completely.
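
For what it's worth, this is how the symptom shows up on the instances
(generic commands, nothing OpenStack-specific):

  # UDP sockets with a backed-up receive queue show a large Recv-Q
  ss -lun

  # the kernel's UDP drop counters climb when the buffer overflows
  netstat -su | grep -i errors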

These instances are running Asterisk, which should be pulling these packets
off the queue, but it appears to be falling behind no matter what resources
we give it.

We can't seem to pin down a reason why we would see this behavior in KVM
but not on metal. I'm hoping someone on the list might have some insight or
ideas.

Thank You,

John
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators