Re: [Openstack-operators] [Openstack] Recovering from full outage

2018-07-12 Thread John Petrini
You might want to try giving the neutron-dhcp and metadata agents a restart.


Re: [Openstack-operators] [Openstack] Recovering from full outage

2018-07-12 Thread John Petrini
Are your instances receiving a route to the metadata service
(169.254.169.254) from DHCP? Can you curl the endpoint? curl
http://169.254.169.254/latest/meta-data
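
If it's not obvious where it breaks, a quick check from inside a guest might
look something like this (a sketch, assuming a Linux guest with iproute2 and
curl available):

# is there a host route to the metadata IP, handed out by DHCP?
ip route | grep 169.254.169.254
# does the endpoint actually answer?
curl -s http://169.254.169.254/latest/meta-data/instance-id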


Re: [Openstack-operators] [Openstack] MTU on Provider Networks

2017-09-18 Thread John Petrini
Great! Thank you both for the information.

John Petrini


[Openstack-operators] MTU on Provider Networks

2017-09-14 Thread John Petrini
Hi List,

We are running Mitaka and having an MTU issue. Instances that we launch on
our provider network use jumbo frames (9000 MTU). There is a Layer 2 link
between the OpenStack switches and our core. This link uses an MTU of
1500.

Up until recently this MTU mismatch has not been an issue because none of
our systems were sending packets large enough to cause a problem. Recently
we've begun implementing a SIP device that sends very large packets,
sometimes even over 9000 bytes, which requires fragmentation.

What we found in our troubleshooting is that when large packets originate
from our network to an instance in OpenStack they are being fragmented (as
expected). Once these packets reach the qbr-XX port iptables
defragments the packet and forwards it to the tap interface unfragmented.
If we set the MTU on the tap interface to 1500 it will refragment the
packet before forwarding it to the instance.

A similar issue happens in the other direction. Large packets originating
from the OpenStack instance are fragmented (we set the MTU of the interface
in the instance to 1500 so this is expected) but once the packets reach the
qbr-XX interface iptables defragments them again. If we set the MTU
of the qvb-XX interface to 1500 the packet is refragmented.

So, long story short: if we set the instance MTU to 1500 and the
qbr-XX and qvb-XX ports on the compute node to a 1500 MTU, the
packets remain fragmented and are able to traverse the network.

So the question becomes can we modify the default MTU of our provider
networks so that the instances created on this network receive a 1500 MTU
from DHCP and the ports on the compute node are also configured to a 1500
MTU?

I've been looking at the following neutron config option in
/etc/neutron/plugins/ml2/ml2_conf.ini:

physical_network_mtus = physnet1:9000,providernet:9000

Documentation on this setting is not very clear. Will adjusting this to
1500 for providernet accomplish what we need?
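
For concreteness, here is a sketch of the kind of change I have in mind,
assuming the Mitaka-era MTU options (global_physnet_mtu in neutron.conf plus
physical_network_mtus in the ML2 config); the providernet:1500 entry is the
part in question:

# /etc/neutron/neutron.conf
[DEFAULT]
global_physnet_mtu = 9000

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
physical_network_mtus = physnet1:9000,providernet:1500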

Thank You,

John Petrini


Re: [Openstack-operators] Pacemaker / Corosync in guests on OpenStack

2017-08-16 Thread John Petrini
I just did this recently and had no issues. I used a provider network, so I
don't have experience with project networks, but I believe the only issue
you might run into there is multicast. You can work around this by using
unicast instead.

If you do use multicast you need to allow IGMP in your security
groups. You can do this in Horizon by selecting "Other Protocol" and setting
the IP protocol number to 2.
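
For anyone doing this from the CLI instead of Horizon, the rule looks roughly
like this (a sketch; the security group name is made up):

# allow IGMP (IP protocol 2) between members of the cluster security group
neutron security-group-rule-create --direction ingress --protocol 2 \
  --remote-group-id cluster-sg cluster-sg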

I hit a minor issue setting up a VIP because port security wouldn't allow
traffic to the instance that was destined for that address, but all I had to
do was add the VIP as an allowed address pair on the port of each instance.
Also, I attached an additional interface to one of the instances to
allocate the VIP; I just didn't configure the interface within the
instance. Since we use DHCP this was a simple way to reserve the IP. I'm
sure I could have created a Pacemaker resource that moved the port using
the OpenStack API, but I prefer the simplicity and speed of Pacemaker's
ocf:heartbeat:IPaddr2 resource.
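
Roughly what that looked like, with made-up IDs and addresses:

# allow the VIP on each cluster member's neutron port
neutron port-update <port-id> --allowed-address-pairs type=dict list=true \
  ip_address=10.0.0.100

# then let Pacemaker manage the VIP inside the guests
pcs resource create cluster_vip ocf:heartbeat:IPaddr2 ip=10.0.0.100 \
  cidr_netmask=24 op monitor interval=10s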

I set up fencing of the instances via the OpenStack API to avoid any chance
of a duplicate IP when moving the VIP. I borrowed this script
https://github.com/beekhof/fence_openstack/blob/master/fence_openstack and
made a few minor changes.

Overall there weren't many differences between setting up Pacemaker in
OpenStack vs. on iron (bare metal), but I hope this is helpful.


Regards,

John Petrini


On Wed, Aug 16, 2017 at 6:06 AM, Tim Bell <tim.b...@cern.ch> wrote:

>
>
> Has anyone had experience setting up a cluster of VM guests running
> Pacemaker / Corosync? Any recommendations?
>
>
>
> Tim
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread John Petrini
Thanks for the info. It might have something to do with the Ceph version
then. We're running Hammer and apparently the du option wasn't added until
Infernalis.
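
For reference, on Infernalis or later the per-image usage we're after would
be something like (a sketch, using the image from the output further down):

# actual (thin-provisioned) usage rather than the provisioned size
rbd du images/d5404709-cb86-4743-b3d5-1dc7fba836c1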

John Petrini

On Tue, Aug 1, 2017 at 4:32 PM, Mike Lowe <joml...@iu.edu> wrote:

> Two things: first, info does not show how much disk is used; du does.
> Second, the semantics count: copy is different than clone and flatten.
> Clone and flatten, which should happen if you have things working correctly,
> is much faster than copy.  If you are using copy then you may be limited by
> the number of management ops in flight; this is a setting in more recent
> versions of Ceph.  I don’t know if copy skips zero-byte objects but clone
> and flatten certainly do.  You need to be sure that you have the proper
> settings in nova.conf for discard/unmap as well as using
> hw_scsi_model=virtio-scsi and hw_disk_bus=scsi in the image properties.
> Once discard is working and you have the qemu guest agent running in your
> instances you can force them to do a fstrim to reclaim space as an
> additional benefit.
>
> On Aug 1, 2017, at 3:50 PM, John Petrini <jpetr...@coredial.com> wrote:
>
> Maybe I'm just not understanding, but when I create a nova snapshot the
> snapshot happens at the RBD level in the ephemeral pool and then it's copied
> to the images pool. This results in a full-sized image rather than a snapshot
> with a reference to the parent.
>
> For example below is a snapshot of an ephemeral instance from our images
> pool. It's 80GB, the size of the instance, so rather than just capturing
> the state of the parent image I end up with a brand new image of the same
> size. It takes a long time to create this copy and causes high IO during
> the snapshot.
>
> rbd --pool images info d5404709-cb86-4743-b3d5-1dc7fba836c1
> rbd image 'd5404709-cb86-4743-b3d5-1dc7fba836c1':
> size 81920 MB in 20480 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.93cdd43ca5efa8
> format: 2
> features: layering, striping
> flags:
> stripe unit: 4096 kB
> stripe count: 1
>
>
> John Petrini
>
> On Tue, Aug 1, 2017 at 3:24 PM, Mike Lowe <joml...@iu.edu> wrote:
>
>> There is no upload if you use Ceph to back your glance (like you should),
>> the snapshot is cloned from the ephemeral pool into the images pool,
>> then flatten is run as a background task.  Net result is that creating a
>> 120GB image vs 8GB is slightly faster on my cloud but not at all what I’d
>> call painful.
>>
>> Running nova image-create for an 8GB image:
>>
>> real 0m2.712s
>> user 0m0.761s
>> sys 0m0.225s
>>
>> Running nova image-create for a 128GB image:
>>
>> real 0m2.436s
>> user 0m0.774s
>> sys 0m0.225s
>>
>>
>>
>>
>> On Aug 1, 2017, at 3:07 PM, John Petrini <jpetr...@coredial.com> wrote:
>>
>> Yes from Mitaka onward the snapshot happens at the RBD level which is
>> fast. It's the flattening and uploading of the image to glance that's the
>> major pain point. Still it's worlds better than the qemu snapshots to the
>> local disk prior to Mitaka.
>>
>> John Petrini
>>
>> Platforms Engineer   //   *CoreDial, LLC*   //   coredial.com   //   [image:
>> Twitter] <https://twitter.com/coredial>   [image: LinkedIn]
>> <http://www.linkedin.com/company/99631>   [image: Google Plus]
>> <https://plus.google.com/104062177220750809525/posts>   [image: Blog]
>> <http://success.coredial.com/blog>
>> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
>> *P:* 215.297.4400 x232 <(215)%20297-4400>   //   *F: *215.297.4401
>> <(215)%20297-4401>   //   *E: *jpetr...@coredial.com
>> <https://t.xink.io/Tracking/Index/a64BAGORAACukSoA0>
>>
>> The information transmitted is intended only for the person or entity to
>> which it is addressed and may contain confidential and/or privileged
>> material. Any review, retransmission,  dissemination or other use of, or
>> taking of any action in reliance upon, this information by persons or
>> entities other than the intended recipient is prohibited. If you received
>> this in error, please contact the sender and delete the material from any
>> computer.
>>
>>
>>
>> On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe <joml...@iu.edu> wrote:
>>
>>> Strictly speaking I don’t think this is the case anymore for Mitaka or
>>> later.  Snapping nova does take more space as the image is flattened, but
>>> the dumb download then upload back into ceph has been cut out.  With
>>> careful attention paid to discard/TRIM I believe you can maintain the thin
>>

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread John Petrini
Maybe I'm just not understanding, but when I create a nova snapshot the
snapshot happens at the RBD level in the ephemeral pool and then it's copied
to the images pool. This results in a full-sized image rather than a snapshot
with a reference to the parent.

For example below is a snapshot of an ephemeral instance from our images
pool. It's 80GB, the size of the instance, so rather than just capturing
the state of the parent image I end up with a brand new image of the same
size. It takes a long time to create this copy and causes high IO during
the snapshot.

rbd --pool images info d5404709-cb86-4743-b3d5-1dc7fba836c1
rbd image 'd5404709-cb86-4743-b3d5-1dc7fba836c1':
size 81920 MB in 20480 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.93cdd43ca5efa8
format: 2
features: layering, striping
flags:
stripe unit: 4096 kB
stripe count: 1
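
One way to see the difference is the parent field; a proper clone keeps a
reference to its parent, while the flattened copy we end up with has none (a
sketch, using the image above; pool names depend on the deployment):

rbd --pool images info d5404709-cb86-4743-b3d5-1dc7fba836c1 | grep parent
# no output here means a full copy; a clone would show something like
#   parent: <ephemeral-pool>/<instance-uuid>_disk@<snap>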


John Petrini

On Tue, Aug 1, 2017 at 3:24 PM, Mike Lowe <joml...@iu.edu> wrote:

> There is no upload if you use Ceph to back your glance (like you should),
> the snapshot is cloned from the ephemeral pool into the images pool,
> then flatten is run as a background task.  Net result is that creating a
> 120GB image vs 8GB is slightly faster on my cloud but not at all what I’d
> call painful.
>
> Running nova image-create for an 8GB image:
>
> real 0m2.712s
> user 0m0.761s
> sys 0m0.225s
>
> Running nova image-create for a 128GB image:
>
> real 0m2.436s
> user 0m0.774s
> sys 0m0.225s
>
>
>
>
> On Aug 1, 2017, at 3:07 PM, John Petrini <jpetr...@coredial.com> wrote:
>
> Yes from Mitaka onward the snapshot happens at the RBD level which is
> fast. It's the flattening and uploading of the image to glance that's the
> major pain point. Still it's worlds better than the qemu snapshots to the
> local disk prior to Mitaka.
>
> John Petrini
>
> Platforms Engineer   //   *CoreDial, LLC*   //   coredial.com   //   [image:
> Twitter] <https://twitter.com/coredial>   [image: LinkedIn]
> <http://www.linkedin.com/company/99631>   [image: Google Plus]
> <https://plus.google.com/104062177220750809525/posts>   [image: Blog]
> <http://success.coredial.com/blog>
> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
> *P:* 215.297.4400 x232 <(215)%20297-4400>   //   *F: *215.297.4401
> <(215)%20297-4401>   //   *E: *jpetr...@coredial.com
> <https://t.xink.io/Tracking/Index/a64BAGORAACukSoA0>
>
> The information transmitted is intended only for the person or entity to
> which it is addressed and may contain confidential and/or privileged
> material. Any review, retransmission,  dissemination or other use of, or
> taking of any action in reliance upon, this information by persons or
> entities other than the intended recipient is prohibited. If you received
> this in error, please contact the sender and delete the material from any
> computer.
>
>
>
> On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe <joml...@iu.edu> wrote:
>
>> Strictly speaking I don’t think this is the case anymore for Mitaka or
>> later.  Snapping nova does take more space as the image is flattened, but
>> the dumb download then upload back into ceph has been cut out.  With
>> careful attention paid to discard/TRIM I believe you can maintain the thin
>> provisioning properties of RBD.  The workflow is explained here.
>> https://www.sebastien-han.fr/blog/2015/10/05/openstack-nova
>> -snapshots-on-ceph-rbd/
>>
>> On Aug 1, 2017, at 11:14 AM, John Petrini <jpetr...@coredial.com> wrote:
>>
>> Just my two cents here but we started out using mostly Ephemeral storage
>> in our builds and looking back I wish we hadn't. Note we're using Ceph as a
>> backend so my response is tailored towards Ceph's behavior.
>>
>> The major pain point is snapshots. When you snapshot a nova volume an
>> RBD snapshot occurs and is very quick and uses very little additional
>> storage, however the snapshot is then copied into the images pool and in
>> the process is converted from a snapshot to a full size image. This takes a
>> long time because you have to copy a lot of data and it takes up a lot of
>> space. It also causes a great deal of IO on the storage and means you end
>> up with a bunch of "snapshot images" creating clutter. On the other hand
>> volume snapshots are near instantaneous without the other drawbacks I've
>> mentioned.
>>
>> On the plus side for ephemeral storage; resizing the root disk of images
>> works better. As long as your image is configured properly it's just a
>> matter of initiating a resize and letting the instance reboot to grow the
>> root disk. When using volumes as your root disk you instead have to
>> s

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread John Petrini
Yes, from Mitaka onward the snapshot happens at the RBD level, which is fast.
It's the flattening and uploading of the image to glance that's the major
pain point. Still it's worlds better than the qemu snapshots to the local
disk prior to Mitaka.

John Petrini

Platforms Engineer   //   *CoreDial, LLC*   //   coredial.com   //   [image:
Twitter] <https://twitter.com/coredial>   [image: LinkedIn]
<http://www.linkedin.com/company/99631>   [image: Google Plus]
<https://plus.google.com/104062177220750809525/posts>   [image: Blog]
<http://success.coredial.com/blog>
751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
*P:* 215.297.4400 x232   //   *F: *215.297.4401   //   *E: *
jpetr...@coredial.com
<https://t.xink.io/Tracking/Index/a64BAGORAACukSoA0>

The information transmitted is intended only for the person or entity to
which it is addressed and may contain confidential and/or privileged
material. Any review, retransmission,  dissemination or other use of, or
taking of any action in reliance upon, this information by persons or
entities other than the intended recipient is prohibited. If you received
this in error, please contact the sender and delete the material from any
computer.



On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe <joml...@iu.edu> wrote:

> Strictly speaking I don’t think this is the case anymore for Mitaka or
> later.  Snapping nova does take more space as the image is flattened, but
> the dumb download then upload back into ceph has been cut out.  With
> careful attention paid to discard/TRIM I believe you can maintain the thin
> provisioning properties of RBD.  The workflow is explained here.
> https://www.sebastien-han.fr/blog/2015/10/05/openstack-
> nova-snapshots-on-ceph-rbd/
>
> On Aug 1, 2017, at 11:14 AM, John Petrini <jpetr...@coredial.com> wrote:
>
> Just my two cents here but we started out using mostly Ephemeral storage
> in our builds and looking back I wish we hadn't. Note we're using Ceph as a
> backend so my response is tailored towards Ceph's behavior.
>
> The major pain point is snapshots. When you snapshot a nova volume an RBD
> snapshot occurs and is very quick and uses very little additional storage,
> however the snapshot is then copied into the images pool and in the process
> is converted from a snapshot to a full size image. This takes a long time
> because you have to copy a lot of data and it takes up a lot of space. It
> also causes a great deal of IO on the storage and means you end up with a
> bunch of "snapshot images" creating clutter. On the other hand volume
> snapshots are near instantaneous without the other drawbacks I've mentioned.
>
> On the plus side for ephemeral storage; resizing the root disk of images
> works better. As long as your image is configured properly it's just a
> matter of initiating a resize and letting the instance reboot to grow the
> root disk. When using volumes as your root disk you instead have to
> shutdown the instance, grow the volume and boot.
>
> I hope this help! If anyone on the list knows something I don't know
> regarding these issues please chime in. I'd love to know if there's a
> better way.
>
> Regards,
>
> John Petrini
>
> On Tue, Aug 1, 2017 at 10:50 AM, Kimball, Conrad <
> conrad.kimb...@boeing.com> wrote:
>
>> In our process of standing up an OpenStack internal cloud we are facing
>> the question of ephemeral storage vs. Cinder volumes for instance root
>> disks.
>>
>>
>>
>> As I look at public clouds such as AWS and Azure, the norm is to use
>> persistent volumes for the root disk.  AWS started out with images booting
>> onto ephemeral disk, but soon after they released Elastic Block Storage and
>> ever since the clear trend has been to EBS-backed instances, and now when I
>> look at their quick-start list of 33 AMIs, all of them are EBS-backed.  And
>> I’m not even sure one can have anything except persistent root disks in
>> Azure VMs.
>>
>>
>>
>> Based on this and a number of other factors I think we want our user
>> normal / default behavior to boot onto Cinder-backed volumes instead of
>> onto ephemeral storage.  But then I look at OpenStack and its design point
>> appears to be booting images onto ephemeral storage, and while it is
>> possible to boot an image onto a new volume this is clumsy (haven’t found a
>> way to make this the default behavior) and we are experiencing performance
>> problems (that admittedly we have not yet run to ground).
>>
>>
>>
>> So …
>>
>> · Are other operators routinely booting onto Cinder volumes
>> instead of ephemeral storage?
>>
>> · What has been your experience with this; any adv

Re: [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread John Petrini
Hi Liping,

Thank you for the detailed response! I've gone over our environment and
checked the various values.

First I found that we are dropping packets on the physcial nics as well as
inside the instance (though only when its UDP receive buffer overflows).

Our physical NICs are using the default ring size and our tap interfaces
are using the default tx queue length of 500. There are dropped packets on
the tap interfaces, but the counts are rather low and don't seem to increase
very often, so I'm not sure there's a problem there. I'm considering
adjusting the value anyway to avoid issues in the future.
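
For reference, the sort of checks I ran (a sketch; the interface names are
made up):

# physical NIC ring sizes and drop counters
ethtool -g em1
ethtool -S em1 | grep -i drop
# tap interface queue length and drops
ip -s link show tap1234abcd-56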

We already tune these values in the VM. Would you suggest tuning them on
the compute nodes as well?
net.core.rmem_max / net.core.rmem_default / net.core.wmem_max /
net.core.wmem_default
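
A sketch of the kind of tuning in question (example values only, not what we
actually run):

# /etc/sysctl.d/90-udp-buffers.conf
net.core.rmem_max = 16777216
net.core.rmem_default = 8388608
net.core.wmem_max = 16777216
net.core.wmem_default = 8388608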

I'm going to do some testing with multiqueue enabled since both you and
Saverio have suggested it.
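
For anyone following along, enabling it is roughly this (a sketch, assuming
the hw_vif_multiqueue_enabled image property and a virtio NIC in the guest):

# tag the image; instances booted from it afterwards get multiqueue vifs
openstack image set --property hw_vif_multiqueue_enabled=true <image-id>
# inside the guest, bring the extra queues online (up to the vCPU count)
ethtool -L eth0 combined 4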



John Petrini

Platforms Engineer   //   *CoreDial, LLC*   //   coredial.com   //   [image:
Twitter] <https://twitter.com/coredial>   [image: LinkedIn]
<http://www.linkedin.com/company/99631>   [image: Google Plus]
<https://plus.google.com/104062177220750809525/posts>   [image: Blog]
<http://success.coredial.com/blog>
751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
*P:* 215.297.4400 x232   //   *F: *215.297.4401   //   *E: *
jpetr...@coredial.com

On Fri, Jul 28, 2017 at 9:25 AM, Erik McCormick <emccorm...@cirrusseven.com>
wrote:

>
>
> On Jul 28, 2017 8:51 AM, "John Petrini" <jpetr...@coredial.com> wrote:
>
> Hi Saverio,
>
> Thanks for the info. The parameter is missing completely:
>
> [interface section from virsh dumpxml elided by the list archive; no
> driver/queues element present]
>
> I've come across the blueprint for adding the image property
> hw_vif_multiqueue_enabled. Do you know if this feature is available in
> Mitaka?
>
> It was merged 2 years ago so should have been there since Liberty.
>
>
> John Petrini
>
> Platforms Engineer   //   *CoreDial, LLC*   //   coredial.com   //   [image:
> Twitter] <https://twitter.com/coredial>   [image: LinkedIn]
> <http://www.linkedin.com/company/99631>   [image: Google Plus]
> <https://plus.google.com/104062177220750809525/posts>   [image: Blog]
> <http://success.coredial.com/blog>
>
> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
> *P:* 215.297.4400 x232 <(215)%20297-4400>   //   *F: *215.297.4401
> <(215)%20297-4401>   //   *E: *jpetr...@coredial.com
>
>
> On Fri, Jul 28, 2017 at 3:59 AM, Saverio Proto <ziopr...@gmail.com> wrote:
>
>> Hello John,
>>
>> A common problem is packets being dropped when they pass from the
>> hypervisor to the instance. There is a bottleneck there.
>>
>> Check the 'virsh dumpxml' of one of the instances that is dropping
>> packets. Check for the interface section, which should look like:
>>
>> [example interface section from virsh dumpxml; XML elided by the list
>> archive]
>>
>> How many queues do you have? Usually having only 1, or the parameter
>> missing completely, is not good.
>>
>> In Mitaka nova should use 1 queue for every instance CPU core you
>> have. It is worth checking whether this is set correctly in your setup.
>>
>> Cheers,
>>
>> Saverio
>>
>>
>>
>> 2017-07-27 17:49 GMT+02:00 John Petrini <jpetr...@coredial.com>:
>> > Hi List,
>> >
>> > We are running Mitaka with VLAN provider networking. We've recently
>> > encountered a problem where the UDP receive queue on instances is
>> filling up
>> > and we begin dropping packets. Moving instances out of OpenStack onto
>> bare
>> > metal resolves the issue completely.
>> >
>> > These instances are running asterisk which should be pulling these
>> packets
>> > off the queue but it appears to be falling behind no matter the
>> resources we
>> > give it.
>> >
>> > We can't seem to pin down a reason why we would see this behavior in
>> KVM but
>> > not on metal. I'm hoping someone on the list might have some insight or
>> > ideas.
>> >
>> > Thank You,
>> >
>> > John
>> >
>> > ___
>> > OpenStack-operators mailing list
>> > OpenStack-operators@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> >
>>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>


Re: [Openstack-operators] UDP Buffer Filling

2017-07-28 Thread John Petrini
Hi Saverio,

Thanks for the info. The parameter is missing completely:

[interface section from virsh dumpxml elided by the list archive; no
driver/queues element present]

I've come across the blueprint for adding the image property
hw_vif_multiqueue_enabled.
Do you know if this feature is available in Mitaka?
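
For reference, a multiqueue-enabled interface section would normally carry a
driver element with a queues attribute, roughly like this (illustrative
values, not from our environment):

<interface type='bridge'>
  <mac address='fa:16:3e:xx:xx:xx'/>
  <source bridge='qbrxxxxxxxx-xx'/>
  <target dev='tapxxxxxxxx-xx'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>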

John Petrini

Platforms Engineer   //   *CoreDial, LLC*   //   coredial.com   //   [image:
Twitter] <https://twitter.com/coredial>   [image: LinkedIn]
<http://www.linkedin.com/company/99631>   [image: Google Plus]
<https://plus.google.com/104062177220750809525/posts>   [image: Blog]
<http://success.coredial.com/blog>
751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
*P:* 215.297.4400 x232   //   *F: *215.297.4401   //   *E: *
jpetr...@coredial.com

On Fri, Jul 28, 2017 at 3:59 AM, Saverio Proto <ziopr...@gmail.com> wrote:

> Hello John,
>
> A common problem is packets being dropped when they pass from the
> hypervisor to the instance. There is a bottleneck there.
>
> Check the 'virsh dumpxml' of one of the instances that is dropping
> packets. Check for the interface section, which should look like:
>
> [example interface section from virsh dumpxml; XML elided by the list
> archive]
>
> How many queues do you have? Usually having only 1, or the parameter
> missing completely, is not good.
>
> In Mitaka nova should use 1 queue for every instance CPU core you
> have. It is worth checking whether this is set correctly in your setup.
>
> Cheers,
>
> Saverio
>
>
>
> 2017-07-27 17:49 GMT+02:00 John Petrini <jpetr...@coredial.com>:
> > Hi List,
> >
> > We are running Mitaka with VLAN provider networking. We've recently
> > encountered a problem where the UDP receive queue on instances is
> filling up
> > and we begin dropping packets. Moving instances out of OpenStack onto
> bare
> > metal resolves the issue completely.
> >
> > These instances are running asterisk which should be pulling these
> packets
> > off the queue but it appears to be falling behind no matter the
> resources we
> > give it.
> >
> > We can't seem to pin down a reason why we would see this behavior in KVM
> but
> > not on metal. I'm hoping someone on the list might have some insight or
> > ideas.
> >
> > Thank You,
> >
> > John
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>


Re: [Openstack-operators] UDP Buffer Filling

2017-07-27 Thread John Petrini
Hi Pedro,

Thank you for the suggestion. I will look into this.

John Petrini

Platforms Engineer   //   *CoreDial, LLC*   //   coredial.com   //   [image:
Twitter] <https://twitter.com/coredial>   [image: LinkedIn]
<http://www.linkedin.com/company/99631>   [image: Google Plus]
<https://plus.google.com/104062177220750809525/posts>   [image: Blog]
<http://success.coredial.com/blog>
751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
*P:* 215.297.4400 x232   //   *F: *215.297.4401   //   *E: *
jpetr...@coredial.com

On Thu, Jul 27, 2017 at 12:25 PM, Pedro Sousa <pgso...@gmail.com> wrote:

> Hi,
>
> have you considered implementing some network acceleration technique like
> OVS-DPDK or SR-IOV?
>
> In these kinds of workloads (voice, video) that have low-latency
> requirements you might need to use something like DPDK to avoid these
> issues.
>
> Regards
>
> On Thu, Jul 27, 2017 at 4:49 PM, John Petrini <jpetr...@coredial.com>
> wrote:
>
>> Hi List,
>>
>> We are running Mitaka with VLAN provider networking. We've recently
>> encountered a problem where the UDP receive queue on instances is filling
>> up and we begin dropping packets. Moving instances out of OpenStack onto
>> bare metal resolves the issue completely.
>>
>> These instances are running asterisk which should be pulling these
>> packets off the queue but it appears to be falling behind no matter the
>> resources we give it.
>>
>> We can't seem to pin down a reason why we would see this behavior in KVM
>> but not on metal. I'm hoping someone on the list might have some insight or
>> ideas.
>>
>> Thank You,
>>
>> John
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>


[Openstack-operators] UDP Buffer Filling

2017-07-27 Thread John Petrini
Hi List,

We are running Mitaka with VLAN provider networking. We've recently
encountered a problem where the UDP receive queue on instances is filling
up and we begin dropping packets. Moving instances out of OpenStack onto
bare metal resolves the issue completely.

These instances are running Asterisk, which should be pulling these packets
off the queue, but it appears to be falling behind no matter how many
resources we give it.
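
For anyone hitting something similar, the symptom shows up in checks roughly
like these (a sketch, not output from our systems):

# per-socket receive queue depth (Recv-Q) on the Asterisk UDP sockets
ss -u -a -n
# system-wide UDP receive and buffer error counters
netstat -su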

We can't seem to pin down a reason why we would see this behavior in KVM
but not on metal. I'm hoping someone on the list might have some insight or
ideas.

Thank You,

John


Re: [Openstack-operators] Cannot create provider network

2017-05-17 Thread John Petrini
How about checking MySQL directly?


select * from neutron.networks;
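
If the networks table is empty too, the leftover record may be in the segment
or allocation tables instead; a guess at where to look (table names vary a
bit by release):

select * from neutron.ml2_flat_allocations;
select * from neutron.networksegments;
-- on older releases the segments table is ml2_network_segments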


On Wed, May 17, 2017 at 11:14 AM, <si...@turka.nl> wrote:

> That is also what I thought, but unfortunately, there is no provider
> network.
>
> The output of neutron net-list is empty.
>
> > It sounds like there's already a provider network. What's the output of
> > neutron net-list?
> >
> > ___
> >
> > John Petrini
> >
> > NOC Systems Administrator   //   *CoreDial, LLC*   //   coredial.com
> > //   [image:
> > Twitter] <https://twitter.com/coredial>   [image: LinkedIn]
> > <http://www.linkedin.com/company/99631>   [image: Google Plus]
> > <https://plus.google.com/104062177220750809525/posts>   [image: Blog]
> > <http://success.coredial.com/blog>
> > Hillcrest I, 751 Arbor Way, Suite 150, Blue Bell PA, 19422
> > *P: *215.297.4400 x232   //   *F: *215.297.4401   //   *E: *
> > jpetr...@coredial.com
> >
> >
> > Interested in sponsoring PartnerConnex 2017? Learn more.
> > <http://success.coredial.com/partnerconnex-2017-sponsorship>
> >
> > The information transmitted is intended only for the person or entity to
> > which it is addressed and may contain confidential and/or privileged
> > material. Any review, retransmission,  dissemination or other use of, or
> > taking of any action in reliance upon, this information by persons or
> > entities other than the intended recipient is prohibited. If you received
> > this in error, please contact the sender and delete the material from any
> > computer.
> >
> > On Wed, May 17, 2017 at 11:00 AM, <si...@turka.nl> wrote:
> >
> >> Hello,
> >>
> >> I have followed the installation documentation at
> >> https://docs.openstack.org/newton/install-guide-rdo.
> >>
> >> Everything went fine.
> >>
> >> As stated at
> >> https://docs.openstack.org/newton/install-guide-rdo/
> >> launch-instance-networks-provider.html#launch-instance-
> networks-provider
> >> I tried to create a provider network. I succeeded with this, but I
> >> wanted
> >> to make some changes, so I have deleted it and tried to recreate it, but
> >> it fails with the following:
> >>
> >> [root@controller ~]# openstack network create  --share --external \
> >> >   --provider-physical-network provider \
> >> >   --provider-network-type flat provider
> >> HttpException: Conflict
> >> [root@controller ~]#
> >>
> >> When I try the same with the neutron command, I am getting the
> >> following:
> >>
> >> [root@controller ~]# neutron net-create --shared
> >> --provider:physical_network provider \
> >> >   --provider:network_type flat provider
> >> Unable to create the flat network. Physical network provider is in use.
> >> Neutron server returns request_ids:
> >> ['req-169949d5-56c6-44e8-b5c7-d251dcf98fc2']
> >> [root@controller ~]#
> >>
> >> Any suggestions?
> >>
> >> Thanks!
> >>
> >>
> >> ___
> >> OpenStack-operators mailing list
> >> OpenStack-operators@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>
> >
>
>
>


Re: [Openstack-operators] [Openstack] Slow internet speed - Instances with floating IP

2017-04-04 Thread John Petrini
Hi All,

Turns out this was a DNS issue. The speedtest-cli script does a lot of
lookups and these lookups were hanging due to parallel A and AAAA requests,
resulting in the unexpected results. I added the following to
/etc/resolv.conf and the issue seems to be resolved.

options single-request-reopen

Thanks,

___

John Petrini


Re: [Openstack-operators] Slow internet speed - Instances with floating IP

2017-04-04 Thread John Petrini
Adding the OpenStack list in hopes this might get some attention there.

Thanks All,

___

John Petrini