Re: [Openstack] [Openstack-operators] UDP Buffer Filling

2018-09-30 Thread Satish Patel
I'm using OpenStack-Ansible to deploy OpenStack on CentOS 7.5 (we are a
100% CentOS shop).

Sent from my iPhone

> On Sep 28, 2018, at 1:45 AM, Remo Mattei  wrote:
> 
> Are you using Ubuntu or TripleO?
> 
> Thanks 



Re: [Openstack] [Openstack-operators] UDP Buffer Filling

2018-09-27 Thread Remo Mattei
Are you using Ubuntu or TripleO?

Thanks 





Re: [Openstack] [Openstack-operators] UDP Buffer Filling

2018-09-27 Thread Satish Patel
I know this thread is old, but I still wanted to post my findings, which
may help other folks understand the issue.

I am dealing with the same issue in my OpenStack network. We are a media
company with lots of VoIP applications, so we need to handle a high rate
of UDP packets. Virtio-net isn't meant to handle high PPS rates: I ran a
couple of tests and found that no matter what txqueue or multiqueue
settings you use, it starts dropping packets after about 50 kpps. I tried
NUMA pinning too, but the result was negative.

Finally I decided to move on and try SR-IOV, and now I am very happy:
SR-IOV reduced my VM guest CPU load by 50%, and now my NIC can handle
200 kpps without dropping any packets.
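
For anyone who wants to try it, here is a rough sketch of attaching an
SR-IOV VF to an instance, assuming SR-IOV is already configured on the
compute nodes (PCI device whitelist plus the sriovnicswitch mechanism
driver); the network, flavor, image and instance names are placeholders:

  # create a direct (SR-IOV) port on the provider network
  openstack port create --network provider-vlan100 --vnic-type direct sriov-port-1
  # boot the instance with that port instead of a normal virtio NIC
  openstack server create --flavor m1.large --image centos7 \
    --nic port-id=$(openstack port show -f value -c id sriov-port-1) voip-vm-1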

I would suggest using the "iptraf-ng" utility to find out your packet
rate and see whether it is above ~40 kpps.
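
If you don't have iptraf-ng handy, a quick-and-dirty way to sample the
packet rate from the kernel counters (the interface name is just an
example):

  IF=eth0
  RX1=$(cat /sys/class/net/$IF/statistics/rx_packets)
  sleep 1
  RX2=$(cat /sys/class/net/$IF/statistics/rx_packets)
  echo "$IF RX pps: $((RX2 - RX1))"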



Re: [Openstack] [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread Eugene Nikanorov
John,

Multiqueue support will require QEMU 2.5+.
I wonder why you need this feature; it only helps in the case of really
high incoming pps or bandwidth.
I'm not sure UDP packet loss can be solved with it, but it is certainly
worth a try.
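
To confirm what the hypervisor is running, something like this on a
compute node should show the QEMU version (the package name assumes a
RHEL/CentOS-style install):

  virsh version
  rpm -q qemu-kvm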

my 2c.

Thanks,
Eugene.



Re: [Openstack] [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread Liping Mao (limao)
> We already tune these values in the VM. Would you suggest tuning them on the 
> compute nodes as well?
No need on the compute nodes (AFAIK).


How much pps does your VM need to handle?
You can monitor CPU usage, especially si (softirq), to see where drops may
occur. If you see a vhost thread almost reaching 100% CPU, multi-queue may
help in some cases.
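
For example, something along these lines on the compute node (mpstat comes
from the sysstat package; the grep pattern assumes the usual vhost-<pid>
thread names):

  # per-CPU softirq (%soft) load, sampled every second
  mpstat -P ALL 1
  # CPU usage of the vhost kernel threads backing the VMs
  top -b -n 1 | grep vhost-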

Thanks.

Regards,
Liping Mao



Re: [Openstack] [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread John Petrini
Hi Liping,

Thank you for the detailed response! I've gone over our environment and
checked the various values.

First, I found that we are dropping packets on the physical NICs as well
as inside the instance (though only when its UDP receive buffer overflows).
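
For reference, the overflow is visible inside the guest with something
like this (assuming net-tools and iproute2 are installed):

  # RcvbufErrors / "receive buffer errors" count UDP receive-queue overflows
  netstat -su | grep -i errors
  # per-socket receive queue depth and buffer sizes for the UDP sockets
  ss -u -a -m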

Our physical NICs are using the default ring size and our tap interfaces
are using the default tx_queue length of 500. There are dropped packets on
the tap interfaces, but the counts are rather low and don't seem to
increase very often, so I'm not sure there's a problem there. I'm
considering adjusting the value anyway to avoid issues in the future.

We already tune these values in the VM. Would you suggest tuning them on
the compute nodes as well?
net.core.rmem_max / net.core.rmem_default / net.core.wmem_max /
net.core.wmem_default

I'm going to do some testing with multiqueues enabled since both you and
Saverio have suggested it.



John Petrini




Re: [Openstack] [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread Erik McCormick
On Jul 28, 2017 8:51 AM, "John Petrini"  wrote:

Hi Saverio,

Thanks for the info. The parameter is missing completely:

<interface type='bridge'>
  <mac address='fa:16:3e:xx:xx:xx'/>
  <source bridge='qbrxxxxxxxx-xx'/>
  <target dev='tapxxxxxxxx-xx'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

I came across the blueprint for adding the image property
hw_vif_multiqueue_enabled. Do you know if this feature is available in
Mitaka?

It was merged 2 years ago so should have been there since Liberty.


John Petrini



On Fri, Jul 28, 2017 at 3:59 AM, Saverio Proto  wrote:

> Hello John,
>
> A common problem is packets being dropped when they pass from the
> hypervisor to the instance. There is a bottleneck there.
>
> Check the 'virsh dumpxml' output of one of the instances that is dropping
> packets. Look at the interface section; it should look like:
>
> <interface type='bridge'>
>   <mac address='fa:16:3e:xx:xx:xx'/>
>   <source bridge='qbrxxxxxxxx-xx'/>
>   <target dev='tapxxxxxxxx-xx'/>
>   <model type='virtio'/>
>   <driver name='vhost' queues='4'/>
>   <alias name='net0'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
> </interface>
>
> How many queues do you have? Usually, having only 1, or the parameter
> missing completely, is not good.
>
> In Mitaka, Nova should use 1 queue for every instance CPU core you have.
> It is worth checking whether this is set correctly in your setup.
>
> Cheers,
>
> Saverio


Re: [Openstack] [Openstack-operators] UDP Buffer Filling

2017-07-27 Thread Liping Mao (limao)
My message was automatically rejected by
openstack-operators-ow...@lists.openstack.org, so I am resending it here.



Hi John,

Do you know where the packets are dropped? On the physical interface, the
tap device, the OVS port, or inside the VM?

We hit UDP packet loss at high pps. You may want to double-check the
following things:

1.  Double check whether your physical interface is dropping packets. If
your rx queue ring size or rx queue count is at the default value, it will
usually drop UDP packets once the rate reaches about 200 kpps on one CPU
core (RSS distributes traffic across cores; a single core starts dropping
at roughly 200 kpps in my experience).

Usually you can get the statistics from ethtool -S <interface> to check
whether there is packet loss because the rx queue is full, and use ethtool
to increase your ring size. I tested in my environment that increasing the
ring size from 512 to 4096 doubled the throughput from 200 kpps to
400 kpps on one CPU core. This may help in some cases.
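
For example (the interface name is an example; check the maximums that
ethtool -g reports before raising anything):

  # look for rx_dropped / rx_missed / fifo errors
  ethtool -S eth0 | grep -iE 'drop|miss|fifo'
  # current vs. hardware-maximum ring sizes
  ethtool -g eth0
  # raise the RX/TX rings toward the hardware maximum
  ethtool -G eth0 rx 4096 tx 4096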



2.  Double check whether your TAP devices are dropping packets. The
default tx_queue length is 500 or 1000; increasing it may help in some
cases.
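
Something like this on the compute node (the tap name and the new length
are examples; pick values for your environment):

  # current qlen plus TX drop counters for the instance's tap device
  ip -s link show tapxxxxxxxx-xx
  # raise the TX queue length
  ip link set dev tapxxxxxxxx-xx txqueuelen 10000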


3.  Double check nf_conntrack_max on your compute nodes and network
nodes. The default value is 65535; in our case the connection count
usually reaches 500k-1m, so we raised the following sysctls well above
that peak:
net.netfilter.nf_conntrack_max
net.nf_conntrack_max
If you see something like "nf_conntrack: table full, dropping packet" in
your /var/log/messages log, it means you have hit this one.
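
A quick way to check, and an example of raising the limit (the new value
is only an example; size it above your observed peak):

  # current usage vs. limit
  sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
  # the tell-tale log message
  grep "nf_conntrack: table full" /var/log/messages
  # raise the limit (persist it in /etc/sysctl.conf as well)
  sysctl -w net.netfilter.nf_conntrack_max=1048576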


4.  You could check whether the drops happen inside your VM; increasing
the following parameters may help in some cases:

net.core.rmem_max / net.core.rmem_default / net.core.wmem_max /
net.core.wmem_default
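
For example, inside the guest (the sizes are examples only):

  sysctl -w net.core.rmem_max=16777216
  sysctl -w net.core.rmem_default=8388608
  sysctl -w net.core.wmem_max=16777216
  sysctl -w net.core.wmem_default=8388608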


5.  If you are using the default network driver (virtio-net), double
check whether the vhost thread of your VM is saturated by CPU soft IRQs.
You can find it by the process name vhost-$PID_OF_YOUR_VM. If it is, you
can try the following feature introduced in "L":

https://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/libvirt-virtiomq.html
Multi-queue may help in some cases, but it will use more vhost threads and
more CPU on your host.
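
A minimal sketch of turning it on (the property name comes from the spec
above; the guest-side step assumes eth0 and a 4-vCPU flavor):

  # tag the image so nova adds a <driver ... queues=N> element (N = vCPUs)
  openstack image set --property hw_vif_multiqueue_enabled=true <image>
  # inside the guest, after booting from that image, enable the extra queues
  ethtool -L eth0 combined 4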


6.  Sometimes CPU NUMA pinning can also help, but you need to reserve the
CPUs and plan your CPU layout statically.
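
For example, via flavor extra specs (the flavor name is a placeholder; this
also assumes vcpu_pin_set / reserved cores are planned on the compute
nodes):

  openstack flavor set m1.voip \
    --property hw:cpu_policy=dedicated \
    --property hw:numa_nodes=1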

I think we should first figure out where the packets are lost and which
part is the bottleneck. Hope this helps, John.
Thanks.

Regards,
Liping Mao





Re: [Openstack] [Openstack-operators] UDP Buffer Filling

2017-07-27 Thread John Petrini
Hi Pedro,

Thank you for the suggestion. I will look into this.

John Petrini


On Thu, Jul 27, 2017 at 12:25 PM, Pedro Sousa  wrote:

> Hi,
>
> Have you considered implementing some network acceleration technique
> such as OVS-DPDK or SR-IOV?
>
> For these kinds of workloads (voice, video) that have low-latency
> requirements, you might need to use something like DPDK to avoid these
> issues.
>
> Regards
>
> On Thu, Jul 27, 2017 at 4:49 PM, John Petrini 
> wrote:
>
>> Hi List,
>>
>> We are running Mitaka with VLAN provider networking. We've recently
>> encountered a problem where the UDP receive queue on instances is filling
>> up and we begin dropping packets. Moving instances out of OpenStack onto
>> bare metal resolves the issue completely.
>>
>> These instances are running Asterisk, which should be pulling these
>> packets off the queue, but it appears to be falling behind no matter the
>> resources we give it.
>>
>> We can't seem to pin down a reason why we would see this behavior in KVM
>> but not on metal. I'm hoping someone on the list might have some insight or
>> ideas.
>>
>> Thank You,
>>
>> John